Published in: Martine Ceberio and Vladik Kreinovich (eds.), Uncertainty, Constraints, and Decision Making, Springer, Cham, Switzerland, 2023, pp. 363-370.
Sometimes, a measuring instrument malfunctions, and we get a value which is very different from the actual value of the measured quantity. Such values are known as outliers. Usually, data processing considers situations when there are relatively few outliers; in such situations, we can delete most of them. However, there are situations when most results are outliers. In this case, we cannot produce a single value which is close to the actual value, but we can generate several values, one of which is close. Of course, all the values produced by the measuring instrument(s) satisfy this property, but there are often too many of them, so we would like to compress this set into a smaller one. In this paper, we prove that irrespective of the size of the original sample, we can always compress this sample into a compressed sample of fixed size.
File UTEP-CS-21-110 in pdf
After each measurement, we get a set of possible values of the measured quantity. This set is usually an interval, but sometimes it is a union of several disjoint intervals -- i.e., a multi-interval. The results of measuring the same quantity are consistent if the corresponding sets intersect. It is known that for any family of intervals, if every two intervals from this family have a non-empty intersection, then the whole family has a non-empty intersection. We use a known result from combinatorial convexity to show that for multi-intervals, even if we require that every k multi-intervals from a given family have a common element, this still does not necessarily imply that the whole family is consistent.
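To illustrate the 1-D Helly property mentioned above, here is a minimal Python sketch (not from the paper): for closed intervals, pairwise non-empty intersection already guarantees a common point, which is exactly the property that fails for multi-intervals.

```python
# A minimal sketch (not from the paper) illustrating the 1-D Helly property:
# if every two closed intervals intersect, then the whole family intersects.

def pairwise_consistent(intervals):
    """Check that every two intervals [l, u] have a common point."""
    return all(max(l1, l2) <= min(u1, u2)
               for i, (l1, u1) in enumerate(intervals)
               for (l2, u2) in intervals[i + 1:])

def common_intersection(intervals):
    """Return the intersection of all intervals, or None if it is empty."""
    lo = max(l for l, _ in intervals)
    hi = min(u for _, u in intervals)
    return (lo, hi) if lo <= hi else None

intervals = [(0, 3), (1, 5), (2, 4)]
assert pairwise_consistent(intervals)
print(common_intersection(intervals))  # (2, 3): pairwise consistency implies joint consistency
```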
File UTEP-CS-21-109 in pdf
It is a good programming practice to include runtime checks called assertions in the code to check assumptions and invariants. Assertions are often said to be most effective when they encode design decisions and constraints. In this paper, we show our preliminary work on translating design constraints to assertions for mobile apps. Design properties and constraints are specified formally in the Object Constraint Language (OCL) and translated to executable assertions written in Dart, the language of the Flutter cross-platform framework. We consider various language- and platform-specific features of OCL, Dart, and Flutter. In our approach, assertions are enabled only in debug mode and removed from the production code. It is important to reduce the memory footprint of a mobile app, since memory on a mobile device is a limited resource.
File UTEP-CS-21-108 in pdf
People are often not 100% confident in their opinions. In computers, absolute confidence is usually described by 1, and absolute confidence in the opposite statement by 0. It is therefore reasonable to estimate an intermediate degree of confidence by a number between 0 and 1. Many people cannot indicate an exact number corresponding to their degree of confidence, so a natural idea is to allow them to mark an interval of possible values. Since each person is not fully confident, a natural way to get a more definite conclusion is to combine the opinions of several people. In this combination, it is reasonable to take the opinions of more certain people with higher weight. For this purpose, we need to assign, to each subinterval of the interval [0,1], a measure of certainty, so that the intervals [0,0] and [1,1] -- corresponding to absolute certainty -- will get measure 1, and the interval [0,1] -- corresponding to absolute uncertainty -- will get measure 0. People have similar trouble marking the exact endpoints of the interval-valued degree; it is therefore reasonable to select a certainty measure that will be the least sensitive -- i.e., the most robust -- to inevitable small changes in these endpoints. In this paper, we find the most robust certainty measure. It turns out to be exactly the measure that has been shown to be empirically successful in analyzing people's opinions.
File UTEP-CS-21-107 in pdf
Published in Computacion y Sistemas, 2021, Vol. 25, No. 4, pp. 775–781.
In recent years, many papers have been devoted to the analysis and applications of negations of finite probability distributions (PD), first considered by Ronald Yager. This paper gives a brief overview of some formal results on the definition and properties of negations of PD. Negations of PD are generated by negators of probability values, which transform a PD element-by-element into a negation of this PD. Negators are non-increasing functions of probability values. There are two types of negators: PD-independent and PD-dependent negators. Yager's negator is fundamental in the characterization of linear PD-independent negators as convex combinations of Yager's negator and the uniform negator. Involutivity of negations is important in logic, and one such involutive negator is considered in the paper. We propose a new simple definition of the class of linear negators generalizing Yager's negator. Different examples illustrate properties of negations of PD. Finally, we consider some open problems in the analysis of negations of probability distributions.
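As a hedged illustration of the negators discussed above (the paper's own notation and generalizations may differ), here is a small Python sketch of Yager's negator, the uniform negator, and their convex combination, applied element-by-element to a finite probability distribution.

```python
import numpy as np

# Illustrative sketch (not the paper's code): element-wise negators of a
# finite probability distribution p = (p_1, ..., p_n).

def yager_negator(p):
    p = np.asarray(p, dtype=float)
    return (1.0 - p) / (len(p) - 1)       # non-increasing in p_i, sums to 1

def uniform_negator(p):
    p = np.asarray(p, dtype=float)
    return np.full(len(p), 1.0 / len(p))  # ignores p_i, sums to 1

def linear_negator(p, alpha):
    """Convex combination of Yager's and the uniform negator, 0 <= alpha <= 1."""
    return alpha * yager_negator(p) + (1.0 - alpha) * uniform_negator(p)

p = [0.7, 0.2, 0.1]
print(yager_negator(p))        # [0.15 0.4  0.45] -- again a valid PD
print(linear_negator(p, 0.5))  # still a valid PD
```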
File UTEP-CS-21-106 in pdf
Published in: Enrico Zio, Panos Pardalos, and Mahdi Fathi (eds.), Handbook of Smart Energy Systems, Springer, 2023, pp. 195-214.
The main objective of a smart energy system is to make control decisions that would make energy systems more efficient and more reliable. To select such decisions, the system must know the consequences of different possible decisions. Energy systems are very complex and cannot be described by a simple formula; the only way to find such consequences reasonably accurately is to test each decision on a simulated system. The problem is that the parameters describing the system and its environment are usually known with uncertainty, and we need to produce reliable results -- i.e., results that will be true for all possible values of the corresponding parameters. In this chapter, we describe techniques for performing reliable simulations under such uncertainty.
File UTEP-CS-21-105 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2022, pp. 83-87.
In many practical situations, we have a large number of objects, too many to be able to thoroughly analyze each of them. To get a general understanding, we need to select a representative sample. For us, this problem was motivated by the need to analyze the possible effect of an earthquake on buildings in El Paso, Texas. In this paper, we provide a reasonable formalization of this problem, and we provide a feasible algorithm for solving the thus-formalized problem.
File UTEP-CS-21-104 in pdf
Published in Journal of Multiple Valued Logic and Soft Computing, 2022, Vol. 39, pp. 445–462.
In the traditional fuzzy logic, we can use "and"-operations (also known as t-norms) to estimate the expert's degree of confidence in a composite statement A&B based on his/her degrees of confidence d(A) and d(B) in the corresponding basic statements A and B. But what if we want to estimate the degree of confidence in A&B&C in situations when, in addition to the degrees of confidence d(A), d(B), and d(C) in the basic statements, we also know the expert's degrees of confidence d(A&B), d(A&C), and d(B&C) in the pairwise conjunctions? Traditional "and"-operations can provide such an estimate -- but only by ignoring some of the available information. In this paper, we show that, by going beyond the traditional "and"- and "or"-operations, we can find a natural estimate that takes all available information into account -- and thus, hopefully, leads to a more accurate estimate.
Original file UTEP-CS-21-103 in pdf
Updated version UTEP-CS-21-103a in pdf
In our previous paper, we showed that a simplified probabilistic approach to interval uncertainty leads to the known notion of a united solution set. In this paper, we show that a more realistic probabilistic analysis of data fitting under interval uncertainty leads to another known notion -- the notion of a tolerable solution set. Thus, the notion of a tolerable solution set also has a clear probabilistic interpretation. The good news is that, in contrast to the united solution set, whose computation is, in general, NP-hard, the tolerable solution set can be computed by a feasible algorithm.
File UTEP-CS-21-102 in pdf
To appear in: Nguyen Ngoc Thach, Nguyen Duc Trung, Doan Thanh Ha, and Vladik Kreinovich (eds.), Artificial Intelligence and Machine Learning for Econometrics: Applications and Regulation (and Related Topics), Springer, Cham, Switzerland.
At present, the most efficient machine learning techniques are deep neural networks. In these networks, a signal repeatedly undergoes two types of transformations: linear combination of inputs, and a non-linear transformation of each value v -> s(v). Empirically, the function s(v) = max(v,0) -- known as the rectified linear function -- works the best. There are some partial explanations for this empirical success; however, none of these explanations is fully convincing. In this paper, we analyze this why-question from the viewpoint of uncertainty propagation. We show that reasonable uncertainty-related arguments lead to another possible explanation of why rectified linear functions are so efficient.
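For readers who want to see the two transformations side by side, here is a minimal Python/NumPy sketch of one such layer: a linear combination of the inputs followed by the rectified linear function s(v) = max(v,0). The weights are illustrative random values, not taken from any trained network.

```python
import numpy as np

# Minimal sketch of one deep-network layer as described above:
# a linear combination of inputs followed by the rectified linear
# function s(v) = max(v, 0), applied component-wise.

def relu(v):
    return np.maximum(v, 0.0)

def layer(x, W, b):
    """One layer: linear combination W @ x + b, then s(v) = max(v, 0)."""
    return relu(W @ x + b)

rng = np.random.default_rng(0)
x = rng.normal(size=4)        # inputs
W = rng.normal(size=(3, 4))   # weights (illustrative values)
b = rng.normal(size=3)        # biases
print(layer(x, W, b))         # non-negative outputs of the layer
```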
File UTEP-CS-21-101 in pdf
In the physical space, we define the distance between two points as the length of the shortest path connecting these points. Similarly, in space-time, for every pair of events for which the event a can causally affect the event b, we can define the longest proper time t(a,b) over all causal trajectories leading from a to b. The resulting function is known as the kinematic metric. In practice, our information about all physical quantities -- including time -- comes from measurement, and measurements are never absolutely precise: the measurement result V is, in general, different from the actual (unknown) value v of the corresponding quantity. In many cases, the only information that we have about each measurement error dv = V - v is the upper bound D on its absolute value. In such cases, once we get the measurement result V, the only information we gain about the actual value v is that v belongs to the interval [V - D, V + D]. In particular, we get intervals [L(a,b), U(a,b)] containing the actual values of the kinematic metric. Sometimes, we underestimate the measurement errors; in this case, there may be no kinematic metric contained in the corresponding narrowed intervals -- and this will be an indication of such an underestimation. Thus, it is important to analyze when there exists a kinematic metric contained in all the given intervals. In this paper, we provide a necessary and sufficient condition for the existence of such a kinematic metric. For cases when such a kinematic metric exists, we also provide bounds on its values.
File UTEP-CS-21-100 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 125-128.
The usual formulas for gauging the quality of a classification method assume that we know the ground truth, i.e., that for several objects, we know for sure to which class they belong. In practice, we often only know this with some degree of certainty. In this paper, we explain how to take this uncertainty into account when gauging the quality of a classification method.
File UTEP-CS-21-99 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 117-120.
In the traditional neural networks, the outputs of each layer serve as inputs to the next layer. It is known that in many cases, it is beneficial to also allow outputs of earlier layers -- not just the immediately preceding one -- to serve as inputs. Such networks are known as residual networks. In this paper, we provide a possible theoretical explanation for the empirical success of residual neural networks.
File UTEP-CS-21-98 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 233-237.
Reasonably recently, a new efficient method appeared for solving complex non-linear differential equations (and systems of differential equations). In this method -- known as Model Order Reduction (MOR) -- we select several solutions, and approximate a general solution by a linear combination of the selected solutions. In this paper, we use the known explanation for efficiency of neural networks to explain the efficiency of MOR techniques.
File UTEP-CS-21-97 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 251-256.
In general, computing the range of a quadratic function on given intervals is NP-hard. Recently, a feasible algorithm was proposed for computing the range of a specific quadratic function -- square of the modulus of a Fourier coefficient. For this function, the rank of the quadratic form -- i.e., the number of nonzero eigenvalues -- is 2. In this paper, we show that this algorithm can be extended to all the cases when the rank of the quadratic form is bounded by a constant.
File UTEP-CS-21-96 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 285-288.
In many practical situations, we need to estimate our degree of belief in a statement "A and B" when the only things we know are the degrees of belief a and b in the individual statements A and B. An algorithm for this estimation is known as an "and"-operation or, for historical reasons, a t-norm. Usually, "and"-operations are selected in such a way that if one of the statements A or B is false, our degree of belief in "A and B" is 0. However, in practice, this is sometimes not the case: for example, an ideal faculty candidate must satisfy many properties -- be a great teacher, and be a wonderful researcher, and be a great mentor, etc. -- but if one of these requirements is not satisfied, this candidate may still be hired. In this paper, we show how to describe the corresponding commonsense "and"-operations.
File UTEP-CS-21-95 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 189-192.
In situations when we have perfect knowledge about the outcomes of several possible alternatives, a natural idea is to select the best of these alternatives. For example, among different investments, we should select the one with the largest gain. In practice, however, we rarely know the exact consequences of each action. In some cases, we know the lower and upper bounds on the corresponding gain. It has been proven that in such cases, an appropriate decision is to use the Hurwicz optimism-pessimism criterion. In this paper, we extend the corresponding results to the cases when we only know an upper bound or a lower bound.
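For concreteness, here is a small Python sketch (not from the paper) of the Hurwicz optimism-pessimism criterion: an alternative whose gain is only known to lie in an interval [low, high] is ranked by alpha*high + (1 - alpha)*low, where alpha is the decision maker's optimism level.

```python
# Illustrative sketch of the Hurwicz optimism-pessimism criterion:
# an alternative with gain known only to lie in [low, high] is ranked by
# alpha * high + (1 - alpha) * low, where alpha in [0, 1] is the optimism level.

def hurwicz_value(low, high, alpha):
    return alpha * high + (1.0 - alpha) * low

def best_alternative(bounds, alpha):
    """bounds: list of (low, high) gain intervals; returns the index of the best one."""
    return max(range(len(bounds)),
               key=lambda i: hurwicz_value(bounds[i][0], bounds[i][1], alpha))

alternatives = [(0.0, 10.0), (4.0, 5.0), (-2.0, 12.0)]
print(best_alternative(alternatives, alpha=0.3))  # 1: cautious decision maker
print(best_alternative(alternatives, alpha=0.9))  # 2: optimistic decision maker
```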
File UTEP-CS-21-94 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2022, pp. 100-108.
In most practical applications, we approximate the spatial dependence by smooth functions. The main exception is geosciences, where, to describe, e.g., how the density depends on depth and/or on spatial location, geophysicists divide the area into regions on each of which the corresponding quantity is approximately constant. In this paper, we provide a possible explanation for this difference.
File UTEP-CS-21-93 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 173-176.
It is a known empirical fact that people overestimate small probabilities. This fact seems to be inconsistent with the fact that we humans are the product of billions of years of improving evolution -- and that we therefore perceive the world as accurately as possible. In this paper, we provide a possible explanation for this seeming contradiction.
File UTEP-CS-21-92 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 93-99.
The main idea behind a smart grid is to equip the grid with a dense lattice of sensors monitoring the state of the grid. If there is a fault, the sensors closer to the fault will detect larger deviations from the normal readings than sensors that are farther away. In this paper, we show that this fact can be used to locate the fault with high accuracy.
File UTEP-CS-21-91 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 177-179.
To elicit people's opinions, we usually ask them to mark their degree of satisfaction on a scale -- e.g., from 0 to 5 or from 0 to 10. Often, people are unsure about the exact degree: 7 or 8? To cover such situations, it is desirable to elicit not a single value but an interval of possible values. However, it turns out that most people are not comfortable with marking an interval. Empirically, it turned out that the best way to elicit an interval is to ask them to draw an oval whose intersection with the 0-to-10 line is the desired interval. Surprisingly, this seemingly more complex 2-D task is easier for most people than the seemingly simpler 1-D task of drawing an interval. In this paper, we provide a possible explanation of why eliciting an interval-related oval is more efficient than eliciting the interval itself.
File UTEP-CS-21-90 in pdf
Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 1, pp. 49-58.
When we know for sure which values are possible and which are not, we have crisp uncertainty -- of which interval uncertainty is a usual case. In practice, we are often not 100% sure about our knowledge, i.e., we have fuzzy uncertainty -- in other words, we have fuzzy knowledge, of which crisp knowledge is a particular case. Usually, general problems are more difficult to solve than most of their particular cases. It was therefore expected that processing fuzzy data is, in general, more computationally difficult than processing interval data -- and indeed, Zadeh's extension principle -- a natural formula for fuzzy computations -- looks very complicated. Unexpectedly, a Zadeh-motivated 1978 paper by Hung T. Nguyen showed that fuzzy computations can be reduced to a few interval ones -- and in this sense, fuzzy and interval computations have, in effect, the same computational complexity. In this paper, we remind the readers about the motivations for (and the proof of) this result, and show how and why, in the last 35 years, this result was generalized in various directions.
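Here is a minimal Python sketch of this reduction for the simple case of adding two triangular fuzzy numbers: for each level alpha, the alpha-cut of the result is obtained by ordinary interval computations on the alpha-cuts of the inputs (the general conditions under which this reduction holds are discussed in the paper).

```python
import numpy as np

# Sketch of the alpha-cut reduction: to compute the membership function of
# f(A, B) for fuzzy numbers A and B, process, for each level alpha, the
# corresponding alpha-cut intervals by ordinary interval computations.
# Here A and B are triangular fuzzy numbers given by (left, peak, right).

def alpha_cut(tri, alpha):
    left, peak, right = tri
    return (left + alpha * (peak - left), right - alpha * (right - peak))

def fuzzy_sum_cuts(A, B, levels):
    """For each alpha, add the alpha-cuts as intervals: [a- + b-, a+ + b+]."""
    return {float(alpha): (alpha_cut(A, alpha)[0] + alpha_cut(B, alpha)[0],
                           alpha_cut(A, alpha)[1] + alpha_cut(B, alpha)[1])
            for alpha in levels}

A, B = (1.0, 2.0, 3.0), (0.0, 1.0, 4.0)
print(fuzzy_sum_cuts(A, B, levels=np.linspace(0, 1, 5)))
```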
File UTEP-CS-21-89 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 193-199.
Why do people become addicted, e.g., to gambling? Experiments have shown that simple lotteries, in which we can win a small prize with a certain probability, are not addictive. However, if we add a second possibility -- of winning a large prize with a small probability -- the lottery becomes highly addictive to many participants. In this paper, we provide a possible theoretical explanation for this empirical phenomenon.
File UTEP-CS-21-88 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 181-187.
Among the most efficient characteristics of a probability distribution are its moments and, more generally, generalized moments. One of the most adequate numerical characteristics describing human behavior is expected utility. In both cases, the corresponding characteristic is the sum of the results of applying appropriate nonlinear functions to individual inputs. In this paper, we provide a possible theoretical explanation of why such functions are efficient.
File UTEP-CS-21-87 in pdf
Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 2, pp. 93-98.
When designing software for self-driving cars, we need to make an important decision: When a self-driving car encounters an emergency situation in which either the car's passenger or an innocent pedestrian has a good chance of being injured or even dying, which option should it choose? This has been a subject of many years of ethical discussions -- and these discussions have not yet led to a convincing solution. In this paper, we propose a "conservative" (status quo) solution that does not require making new ethical decisions -- namely, we propose to limit both the risks to passengers and the risks to pedestrians to their current levels, levels that exist now and are therefore acceptable to society.
File UTEP-CS-21-86 in pdf
Published in: Oscar Castillo and Patricia Melin (eds.), New Perspectives on Hybrid Intelligent System Design Based on Fuzzy Logic, Neural Networks and Metaheuristics, Springer, 2023, pp. 465-473.
At present, the most efficient machine learning technique is the use of deep neural networks. However, recent empirical results show that in some situations, it is even more efficient to use "localized" learning -- i.e., to divide the domain of inputs into sub-domains, learn the desired dependence separately on each sub-domain, and then "smooth" the resulting dependencies into a single algorithm. In this paper, we provide a theoretical explanation for these empirical successes.
File UTEP-CS-21-85 in pdf
Published in: Oscar Castillo and Patricia Melin (eds.), New Perspectives on Hybrid Intelligent System Design Based on Fuzzy Logic, Neural Networks and Metaheuristics, Springer, 2023, pp. 475-483.
Predictions are usually based on what are called laws of nature: many times, we observe the same relation between the states at different moments of time, and we conclude that the same relation will occur in the future. The more times the relation repeats, the more confident we are that the same phenomenon will be repeated again. This is how Newton's laws and other laws came into being. This is what is called inductive reasoning. However, there are other reasonable approaches. For example, assume that a person speeds and is not caught. This may be repeated two times, three times -- but here, the more times this phenomenon is repeated, the more confident we become that next time, he/she will be caught. Let us call this anti-inductive reasoning. So which of the two approaches shall we use? This is an example of a question that we study in this paper.
File UTEP-CS-21-84 in pdf
Published in: Oscar Castillo and Patricia Melin (eds.), New Perspectives on Hybrid Intelligent System Design Based on Fuzzy Logic, Neural Networks and Metaheuristics, Springer, 2023, pp. 459-463.
At present, the most successful machine learning technique is deep learning, which uses the rectified linear activation function (ReLU) s(x) = max(x,0) as a non-linear data processing unit. While this selection was guided by general ideas (which were often imprecise), the selection itself was still largely empirical. This leads to a natural question: are these selections indeed the best, or are there even better selections? A possible way to answer this question would be to provide a theoretical explanation of why these selections are -- in some reasonable sense -- the best. This paper provides a possible theoretical explanation for this empirical fact.
File UTEP-CS-21-83 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Biomedical and Other Applications of Soft Computing, Springer, Cham, Switzerland, 2023, pp. 67-72.
We show that natural invariance ideas explain the empirical dependence of the pavement's lifetime on the stress level.
File UTEP-CS-21-82 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 167-171.
One of the motivations for Zadeh's development of fuzzy logic -- and one of the explanations for the success of fuzzy techniques -- is the empirical observation that as complexity rises, meaningful statements lose precision. In this paper, we provide a possible explanation for this empirical phenomenon.
File UTEP-CS-21-81 in pdf
Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 2, pp. 53-59.
Some researchers have a few main ideas that they apply to many different problems -- they are called hedgehogs. Other researchers have many ideas but apply them to fewer problems -- they are called foxes. Both approaches have their advantages and disadvantages. What is the best balance between these two approaches? In this paper, we provide general recommendations about this balance. Specifically, we conclude that productivity is optimal when the time spent on generating new ideas is equal to the time spent on understanding new applications. So, if for a researcher, understanding a new problem is much easier than generating a new idea, this researcher should generate fewer ideas -- i.e., be a hedgehog. Vice versa, if for a researcher, generating a new idea is easier than understanding a new problem, it is more productive for this person to generate many ideas -- i.e., to be a fox. For researchers for whom these times are of the same order, we provide explicit formulas for the optimal research strategy.
File UTEP-CS-21-80 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Biomedical and Other Applications of Soft Computing, Springer, Cham, Switzerland, 2023, pp. 73-81.
Among the main fundamental challenges related to physics and human intelligence are: How can we reconcile free will with the deterministic character of physical equations? What is the physical meaning of the extra spatial dimensions needed to make quantum physics consistent? And why are we often smarter than brain-simulating neural networks? In this paper, we show that while each of these challenges is difficult to resolve on its own, it may be possible to resolve all three of them if we consider them together. The proposed possible solution is that human reasoning uses the extra spatial dimensions. This may sound weird, but in this paper, we explain that this solution is much more natural than it sounds at first glance.
File UTEP-CS-21-79 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Biomedical and Other Applications of Soft Computing, Springer, Cham, Switzerland, 2023, pp. 83-91.
Plants play a very important role in ecological systems -- they transform CO2 into oxygen. It is therefore very important to be able to estimate the overall amount of live green vegetation in a given area. The most efficient way to provide such a global analysis is to use remote sensing, i.e., multi-spectral photos taken from satellites, drones, planes, etc. At present, one of the most efficient ways to detect, based on remote sensing data, how much live green vegetation an area contains is to compute the value of the normalized difference vegetation index (NDVI). In this paper, we provide a theoretical explanation of why this particular index is efficient.
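For reference, here is a short Python/NumPy sketch of the standard NDVI formula (NIR - Red)/(NIR + Red) applied to two hypothetical bands; the band values are made up purely for illustration.

```python
import numpy as np

# Sketch of the standard NDVI computation from the near-infrared (NIR) and
# red bands of a multi-spectral image; values close to +1 indicate dense
# live green vegetation.

def ndvi(nir, red, eps=1e-12):
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red + eps)  # eps avoids division by zero

nir_band = np.array([[0.60, 0.55], [0.20, 0.80]])
red_band = np.array([[0.10, 0.12], [0.18, 0.05]])
print(ndvi(nir_band, red_band))             # higher values = more vegetation
```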
File UTEP-CS-21-78 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 223-229.
In this paper, we show that many seemingly irrational Biblical ideas can actually be rationally interpreted: that God is everywhere, that we can only say what God is not, that God's name is holy, why you cannot bless as many people as you want, etc. We do not insist on our interpretations -- there are probably many others; our sole objective is to show that many Biblical ideas can be rationally explained.
File UTEP-CS-21-77 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 25-32.
Why is 70/100 usually the threshold for a student's satisfactory performance? Why are there usually only five letter grades? Why is the usual arrangement of research, teaching, and service 40-40-20? We show that all these arrangements -- and other similar academic arrangements -- can be explained by two ideas: the Laplace Indeterminacy Principle and the "seven plus or minus two" law.
File UTEP-CS-21-76 in pdf
Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 2, pp. 43-51.
In the traditional approach to engineering system design, engineers usually come up with several possible designs, each improving on the previous ones. In coming up with these designs, they try their best to make sure that their designs stay within the safety and other constraints, to avoid potential catastrophic crashes. The need for these safety constraints makes this design process reasonably slow. Software engineering at first followed the same pattern, but then software engineers realized that since in most cases, failure of a software test does not lead to a catastrophe, it is much faster to first ignore constraints and then adjust the resulting non-compliant designs so that the constraints are satisfied. Lately, a similar "move fast and break things" approach has been applied to engineering design as well, especially when designing autonomous systems whose failure-when-testing is not catastrophic. In this paper, we provide a simple mathematical model explaining, in quantitative terms, why moving fast and breaking things makes sense.
File UTEP-CS-21-75 in pdf
Published in Proceedings of the IEEE Series of Symposia on Computational Intelligence SSCI'2021, Orlando, Florida, December 4-7, 2021.
Many general mathematical formulations of uncertainty quantification problems are NP-hard, meaning that (unless it turns out that P = NP) no feasible algorithm is possible that would always solve these problems. In this paper, we argue that if we restrict ourselves to practical problems, then the correspondingly restricted problems become feasible -- namely, they can be solved by using linear programming techniques.
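As a hedged illustration of the kind of linear-programming computation meant here (not the paper's specific construction), the following Python sketch uses scipy.optimize.linprog to bound a linear combination of quantities known only with interval uncertainty and subject to an additional linear constraint.

```python
import numpy as np
from scipy.optimize import linprog

# Illustrative sketch: bound a linear combination y = c1*x1 + ... + cn*xn
# when each xi is only known to lie in an interval and the xi satisfy
# additional linear constraints A x <= b. Both the lower and the upper
# bound on y are obtained by linear programming.

c = np.array([2.0, -1.0, 0.5])
bounds = [(0.0, 1.0), (0.5, 2.0), (-1.0, 1.0)]  # interval uncertainty on x
A_ub = np.array([[1.0, 1.0, 1.0]])              # extra constraint: x1 + x2 + x3 <= 2
b_ub = np.array([2.0])

lo = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun    # minimum of y
hi = -linprog(-c, A_ub=A_ub, b_ub=b_ub, bounds=bounds).fun  # maximum of y
print(lo, hi)
```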
Original file UTEP-CS-21-74 in pdf
Updated version UTEP-CS-21-74a in pdf
Published in Proceedings of the IEEE Series of Symposia on Computational Intelligence SSCI'2021, Orlando, Florida, December 4-7, 2021.
In many practical situations, we know that there is a functional dependence between a quantity q and quantities a1, ..., an, but the exact form of this dependence is only known with uncertainty. In some cases, we only know the class of possible functions describing this dependence. In other cases, we also know the probabilities of different functions from this class -- i.e., we know the corresponding random field or random process. To solve problems related to such a dependence, it is desirable to be able to simulate the corresponding functions, i.e., to have algorithms that transform simple intervals or simple random variables into functions from the desired class. Many of the real-life dependencies are very complex, requiring a large amount of computation time even if we ignore the uncertainty. So, to make simulation of uncertainty practically feasible, we need to make sure that the corresponding simulation algorithm is as fast as possible. In this paper, we show that for this objective, ideas behind neural networks lead to the known Karhunen-Loeve decomposition and interval field techniques -- and also that these ideas help us go -- when necessary -- beyond these techniques.
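Here is a minimal Python/NumPy sketch of Karhunen-Loeve-style simulation on a grid, with an assumed exponential covariance model chosen purely for illustration: after one eigendecomposition, each new sample of the random field is just a short linear combination of eigenvectors with independent standard normal coefficients.

```python
import numpy as np

# Minimal sketch of Karhunen-Loeve-style simulation of a random field on a
# grid: eigendecompose the covariance matrix once, then each new sample is
# a linear combination of a few eigenvectors with independent standard
# normal coefficients (the covariance model here is an assumption).

grid = np.linspace(0.0, 1.0, 50)
cov = np.exp(-np.abs(grid[:, None] - grid[None, :]) / 0.2)  # assumed covariance
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

k = 10                                                      # truncation level
rng = np.random.default_rng(42)
xi = rng.standard_normal(k)
sample = eigvecs[:, :k] @ (np.sqrt(eigvals[:k]) * xi)       # one simulated field
print(sample[:5])
```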
Original file UTEP-CS-21-73 in pdf
Updated version UTEP-CS-21-73a in pdf
Published in: Nguyen Ngoc Thach, Vladik Kreinovich, Doan Thanh Ha, and Nguyen Duc Trung (eds.), Financial Econometrics: Bayesian Analysis, Quantum Uncertainty, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 1-7.
Experts' estimates are approximate. To make decisions based on these estimates, we need to know how accurate these estimates are. Sometimes, experts themselves estimate the accuracy of their estimates -- by providing an interval of possible values instead of a single number. In other cases, we can gauge the accuracy of the experts' estimates by asking several experts to estimate the same quantity and using the interval range of these values. In both situations, sometimes the interval is too narrow -- e.g., if an expert is overconfident. Sometimes, the interval is too wide -- if the expert is too cautious. In such situations, we correct these intervals by making them, correspondingly, wider or narrower. Empirical studies show that people use specific formulas for such corrections. In this paper, we provide a theoretical explanation for these empirical formulas.
File UTEP-CS-21-72 in pdf
Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 2, pp. 61-68.
Usually, we mostly gauge individual students' skills. However, in the modern world, problems are rarely solved by individuals; solving them is usually a group effort. So, to make sure that students are successful, we also need to gauge their ability to collaborate. In this paper, we describe when it is possible to gauge the students' ability to collaborate; in situations when such a determination is possible, we explain how exactly we can estimate these abilities.
File UTEP-CS-21-71 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Biomedical and Other Applications of Soft Computing, Springer, Cham, Switzerland, 2023, pp. 49-56.
If we continue the same activity for a long time, our productivity decreases. To increase productivity, a natural idea is therefore to switch to a different activity, and then to switch back and resume the original task. On the other hand, after each switch, we need some time to get back to the original productivity. As a result, too frequent switches are also counterproductive. Natural questions are: shall we switch? If yes, when? In this paper, we use a simple model to provide approximate answers to these questions.
File UTEP-CS-21-70 in pdf
Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Credible Asset Allocation, Optimal Transport Methods, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 185-193.
In many practical situations, we can predict the trend -- i.e., how the system will change -- but we cannot predict the exact timing of this change: this timing may depend on many unpredictable factors. For example, we may be sure that the economy will recover, but how fast it will recover may depend on the status of the pandemic, on the weather-affected agricultural input, etc. In such trend predictions, one of the most efficient methods is the signature method, which is based on applying machine learning techniques to several special characteristics of the corresponding time series. In this paper, we provide an explanation for the empirical success of the signature method.
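As a hedged illustration of what such characteristics look like, the following Python sketch computes the first two levels of the path signature of a piecewise-linear multidimensional time series; the paper's exact feature set and learning pipeline may differ.

```python
import numpy as np

# Sketch of the first two levels of the path signature for a piecewise-linear
# multidimensional time series; these are typical inputs of signature-based
# methods (the paper's exact features may differ).

def signature_levels_1_2(path):
    """path: array of shape (n_points, d). Returns (level-1, level-2) terms."""
    increments = np.diff(path, axis=0)
    d = path.shape[1]
    s1 = increments.sum(axis=0)          # level-1: total increment
    s2 = np.zeros((d, d))
    running = np.zeros(d)                # increment accumulated so far
    for dk in increments:
        s2 += np.outer(running, dk) + 0.5 * np.outer(dk, dk)
        running += dk
    return s1, s2

path = np.array([[0.0, 0.0], [1.0, 0.5], [1.5, 2.0], [2.0, 1.0]])
s1, s2 = signature_levels_1_2(path)
print(s1)   # level-1 signature
print(s2)   # level-2 signature (iterated-integral terms)
```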
File UTEP-CS-21-69 in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Biomedical and Other Applications of Soft Computing, Springer, Cham, Switzerland, 2023, pp. 57-66.
Somewhat surprisingly, several formulas of quantum physics -- the physics of the micro-world -- provide a good first approximation to many social phenomena, in particular, to many economic phenomena -- phenomena which are very far from micro-physics. In this paper, we provide three possible explanations for this surprising fact. First, we show that several formulas from quantum physics actually provide a good first-approximation description of many phenomena in general, not only of the phenomena of micro-physics. Second, we show that some quantum formulas represent the fastest way to compute nonlinear dependencies and thus naturally appear when we look for easily computable models; in this aspect, there is a very strong similarity between quantum techniques and neural networks. Third, due to numerous practical applications of micro-phenomena, many problems related to quantum equations have been solved; so, when we use quantum techniques to describe social phenomena, we can utilize the numerous existing solutions -- which would not have been the case if we used other nonlinear techniques, for which not many solutions are known. All this provides an explanation of why quantum techniques work reasonably well in economics.
File UTEP-CS-21-68 in pdf
Published in Acta Polytechnica Hungarica, 2022, Vol. 19, No. 10, pp. 49-80.
It is known that, due to the Central Limit Theorem, the probability distribution of the uncertainty of the result of data processing is, in general, close to Gaussian -- or to a distribution from a somewhat more general class known as infinitely divisible. We show that a similar result holds in the fuzzy case: namely, the membership function describing the uncertainty of the result of data processing is, in general, close to Gaussian -- or to a membership function from an explicitly described more general class.
File UTEP-CS-21-67 in pdf
To appear in Proceedings of the 9th World Conference on Soft Computing, Baku, Azerbaijan, September 24-27, 2024.
How can we apply the ideas that made deep neural networks successful to other aspects of computing? For this purpose, we reformulate these ideas in a more general form -- and we show that this generalization also covers fuzzy and quantum computing. This enables us to suggest that similar ideas can be helpful for fuzzy and quantum computing as well. In this suggestion, we are encouraged by the fact that as we show, to some extent, these ideas are already helpful.
Original file UTEP-CS-21-66 in pdf
Updated version UTEP-CS-21-66c in pdf
Published in Asian Journal of Economics and Banking, 2021, Vol. 5, No. 3, pp. 226-233, DOI 10.1108/AJEB-07-2021-0080.
In 1951, Kenneth Arrow proved that it is not possible to have a group decision making procedure that satisfies reasonable requirements like fairness. From the theoretical viewpoint, this is a great result -- well deserving the Nobel Prize that was awarded to Professor Arrow. However, from the practical viewpoint, the question remains -- so how should we make group decisions? A usual way to solve this problem is to provide some reasonable heuristic ideas, but the problem is that different seemingly reasonable ideas often lead to different group decisions -- this is known, e.g., for different voting schemes. In this paper, we analyze this problem from the viewpoint of decision theory, the basic theory underlying all our activities -- including economic ones -- and describe the corresponding recommendations. Most of the resulting recommendations have been proposed earlier. The main objective of this paper is to provide a unified coherent narrative that leads from the fundamental first principles to practical recommendations.
File UTEP-CS-21-65 in pdf
Published in Asian Journal of Economics and Banking (AJEB), 2021.
On the one hand, everyone agrees that economics should be fair, that workers should get equal pay for equal work. Any instance of unfairness causes a strong disagreement. On the other hand, in many companies, advanced workers -- who produce more than others -- get paid disproportionately more for their work, and this does not seem to cause any negative feelings. In this paper, we analyze this situation from the economic viewpoint. We show that from this viewpoint, additional payments for advanced workers indeed make economic sense, benefit everyone, and thus -- in contrast to the naive literal interpretation of fairness -- are not unfair. As a consequence of our analysis, we also explain why the labor share of the companies' income is, on average, close to 50% -- an empirical fact that, to the best of our knowledge, was never previously explained.
File UTEP-CS-21-64 in pdf
Published in: Witold Pedrycz and Shyi-Ming Chen (eds.), Recent Advancements in Multi-View Data Analytics, Springer, 2022, pp. 23-53.
Multi-view techniques help us reconstruct a 3-D object and its properties from its 2-D (or even 1-D) projections. It turns out that similar techniques can be used in processing uncertainty -- where many problems can be reduced to a similar task of reconstructing properties of a multi-D object from its 1-D projections. In this chapter, we provide an overview of these techniques.
File UTEP-CS-21-63 in pdf
Updated version UTEP-CS-21-63a in pdf
Updated version UTEP-CS-21-63b in pdf
Published in Journal of Intelligent and Fuzzy Systems, 2022, Vol. 43, No. 6, pp. 6933-6938.
In many applications, including analysis of seismic signals, Daubechies wavelets perform much better than other families of wavelets. In this paper, we provide a possible theoretical explanation for the empirical success of Daubechies wavelets. Specifically, we show that these wavelets are optimal with respect to any optimality criterion that satisfies the natural properties of scale- and shift-invariance.
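For readers who want to experiment, here is a short Python sketch of such a decomposition, assuming the PyWavelets package is available; 'db4' is one member of the Daubechies family, and the test signal is made up for illustration.

```python
import numpy as np
import pywt  # PyWavelets; assumed to be installed

# Illustrative decomposition of a noisy signal with a Daubechies wavelet
# ('db4'), the kind of transform whose empirical success the paper explains.

t = np.linspace(0.0, 1.0, 1024)
signal = (np.sin(2 * np.pi * 5 * t)
          + 0.3 * np.random.default_rng(0).standard_normal(t.size))

coeffs = pywt.wavedec(signal, 'db4', level=4)  # approximation + 4 detail levels
print([len(c) for c in coeffs])
```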
Original file UTEP-CS-21-62 in pdf
Revised version UTEP-CS-21-62a in pdf
Published in Journal of Intelligent and Fuzzy Systems, 2022, Vol. 43, No. 6, pp. 6947-6951.
Neural networks -- specifically, deep neural networks -- are, at present, the most effective machine learning techniques. There are reasonable explanations of why deep neural networks work better than traditional "shallow" ones, but the question remains: why neural networks in the first place? Why not networks consisting of non-linear functions from some other family of functions? In this paper, we provide a possible theoretical answer to this question: namely, we show that of all families with the smallest possible number of parameters, families corresponding to neurons are indeed optimal for all optimality criteria that satisfy some reasonable requirements -- namely, for all optimality criteria which are final and invariant with respect to coordinate changes, changes of measuring units, and similar linear transformations.
Original file UTEP-CS-21-61 in pdf
Updated version UTEP-CS-21-61a in pdf
Published in: Nguyen Hoang Phuong and Vladik Kreinovich (eds.), Biomedical and Other Applications of Soft Computing, Springer, Cham, Switzerland, 2023, pp. 41-47.
At present, the most efficient machine learning technique is deep learning, in which non-linearity is attained by using rectified linear functions s(x)=max(0,x). Empirically, these functions work better than any other nonlinear functions that have been tried. In this paper, we provide a possible theoretical explanation for this empirical fact. This explanation is based on the fact that one of the main applications of neural networks is decision making, when we want to find an optimal solution. We show that the need to adequately deal with situations when the corresponding optimization problem is feasible -- i.e., for which the objective function is convex -- uniquely selects rectified linear activation functions.
Original file UTEP-CS-21-60 in pdf
Published in International Journal of Computing, 2022, Vol. 21, No. 4, pp. 411-423.
Many quantum algorithms have been proposed which are drastically more efficient than the best of the non-quantum algorithms for solving the same problems. A natural question is: are these quantum algorithms already optimal -- in some reasonable sense -- or can they be further improved? In this paper, we review recent results showing that many known quantum algorithms are actually optimal. Several of these results are based on appropriate invariances (symmetries).
Original file UTEP-CS-21-59 in pdf
1st updated version UTEP-CS-21-59a in pdf
2nd updated version UTEP-CS-21-59c in pdf
Published in Advances in Artificial Intelligence and Machine Learning, 2021, Vol. 1, No. 1, pp. 81-88.
Fuzzy techniques depend heavily on eliciting meaningful membership functions for the fuzzy sets used. Often such functions are obtained from data. Just as often, they are obtained from experts knowledgeable about the domain and the problem being addressed. However, there are cases when neither is possible, for example because of insufficient data or unavailable experts. What functions should one choose, and what should guide such a choice? This paper argues in favor of using Cauchy membership functions, thus named because their expression is similar to that of the Cauchy distribution. The paper provides a theoretical explanation for this choice.
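As a hedged illustration (the paper's exact parameterization may differ), here is a small Python sketch of a Cauchy-type membership function, whose shape mirrors the Cauchy probability density.

```python
import numpy as np

# Sketch of a Cauchy-type membership function: its shape mirrors the density
# of the Cauchy distribution, mu(x) = 1 / (1 + ((x - a) / d)^2), with center a
# and width parameter d (the exact form used in the paper may differ).

def cauchy_membership(x, center, width):
    return 1.0 / (1.0 + ((np.asarray(x, dtype=float) - center) / width) ** 2)

x = np.linspace(-5.0, 5.0, 11)
print(cauchy_membership(x, center=0.0, width=1.0))  # 1 at the center, decays slowly
```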
Original file UTEP-CS-21-58 in pdf
Updated version UTEP-CS-21-58b in pdf
Published in Advances in Artificial Intelligence and Machine Learning, 2022, Vol. 2, No. 2, pp. 385-393.
An important step in designing a fuzzy system is the elicitation of the membership functions for the fuzzy sets used. Often the membership functions are obtained from data in a training-like manner. They are expected to match or at least be compatible with those obtained from experts knowledgeable about the domain and the problem being addressed. In cases when neither is possible, e.g., due to insufficient data or unavailability of experts, we are faced with the question of hypothesizing the membership function. We have previously argued in favor of Cauchy membership functions (thus named because their expression is similar to that of the Cauchy distribution) and supported this choice from the point of view of efficiency of training. This paper looks at the same family of membership functions from the point of view of reliability.
UTEP-CS-21-58c in pdf
Published in Journal of Smart Environments and Green Computing, 2021, Vol. 1, pp. 146-158.
Environment-related problems are extremely important for mankind; the fate of humanity itself depends on our ability to solve these problems. These problems are complex; we cannot solve them without using powerful computers. Thus, in environmental research, environment-related computing is one of the main computing-related research directions. Another direction is related to the fact that computing itself can be (and currently is) harmful for the environment. How to make computing more environment-friendly, how to move towards green computing -- this is the second important direction. A third direction is motivated by the very complexity of environmental systems: it is difficult to predict how the environment will change. This seemingly negative difficulty can be reformulated in more positive terms: by observing environmental processes, we can find values which are difficult to compute. A similar observation about quantum processes has led to successful quantum computing, so why not use environmental processes themselves to help us compute? This is the third direction in environment-related computing research. In this paper, we provide three examples showing that in all three directions, non-trivial mathematical analysis can help.
File in pdf
Published in Proceedings of the 14th International Workshop on Constraint Programming and Decision Making CoProd'2021, Szeged, Hungary, September 12, 2021; extended version published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 289-296.
While Deep Neural Networks (DNNs) have shown incredible performance on a variety of data, they are brittle and opaque: easily fooled by the presence of noise, and it is difficult to understand the underlying reasoning behind their predictions or choices. This focus on accuracy at the expense of interpretability and robustness caused little concern since, until recently, DNNs were employed primarily for scientific and limited commercial work. An increasing, widespread use of artificial intelligence and a growing emphasis on user data protections, however, motivate the need for robust solutions with explainable methods and results. In this work, we extend a novel fuzzy-based algorithm for regression to multidimensional problems. Previous research demonstrated that this approach outperforms neural network benchmarks while using only 5% of the number of parameters.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 19-24.
The paper describes and explains the teaching strategy of Iosif Yakovlevich Verebeichik, a successful mathematics teacher at special mathematical high schools -- schools for students interested in and skilled in mathematics. The resulting strategy seems counterintuitive and contrary to all the pedagogical advice. Our explanation is not complete: the strategy worked well for this teacher, but others who tried to follow seemingly the same strategy did not succeed. How he made it work, and how others can make it work -- this is still not clear. In the words of Verebeichik himself, while mathematics itself is a science, teaching mathematics is an art, which cannot be reduced to a few recommendations.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 33-38.
In most European universities, the overall student's grade for a course is determined exclusively by this student's performance on the final exam. All intermediate grades -- on homework, quizzes, and previous tests -- are, in effect, ignored. This arrangement helps gauge the student's performance by the knowledge that the student shows at the end of the course. The main drawback of this approach is that some students do not start studying until later, thinking that they can catch up and even get an excellent grade -- and this hurts their performance. To motivate students to study hard throughout the semester, most US universities estimate the overall grade for the course as a weighted average of the grade on the final exam and of all intermediate grades. In this paper, we show that even when a student is already motivated, to accurately gauge the student's level of knowledge, it is important to take intermediate grades into account.
File in pdf
Published in Mathematical Structures and Modeling, 2021, Vol. 58, pp. 16-22.
In 2021, we are celebrating the 90th birthday of Revolt Pimenov, a specialist in space-time geometry. He was my teacher. In this article, I am trying to summarize what he taught to his students.
File in pdf
Published in Mathematical Structures and Modeling, 2022, Vol. 62, pp. 130-138.
In all areas of human activity, there are natural ordering relations: causality in space-time physics, preference in decision making, and logical inference in reasoning. In space-time physics, a 1950 theorem by A. D. Alexandrov proved that the causality relation is fundamental: many other features, including numerical characteristics of time and space, can be reconstructed from this relation. In this paper, we provide simple proofs that, similarly, the corresponding ordering relations are fundamental in decision making and in logical reasoning.
File in pdf
Published in Mathematical Structures and Modeling, 2021, Vol. 58, pp. 10-15.
This year, Revolt Pimenov, a philosophical thinker whose main ideas were in the geometry of space-time, would have turned 90. In this essay, we explain how in the 1950s-70s, when he was most productive, his were two of the five natural and important revolutionary scientific ideas -- along with fuzzy logic, constructive mathematics, and the scalar-tensor theory of gravitation -- ideas that, in our opinion, still have the potential to change the world.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 211-214.
Reasonably recent experiments show that unhappiness is strongly correlated with excessive interaction between two parts of the brain -- the amygdala and the hippocampus. At first glance, in situations when outside signals are positive, additional interaction between two parts of the brain that get signals from different sensors should only reinforce the positive feeling. In this paper, we provide a simple explanation of why, instead of the expected reinforcement, we observe unhappiness.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 217-221.
In his unpublished paper, the famous logician Kurt Godel provided arguments in favor of the existence of God. These arguments are presented in a very formal way, which makes them difficult for many interested readers to understand. In this paper, we describe a simplifying modification of Godel's proof which will hopefully make it easier to understand. We also describe, in clear terms, why Godel's arguments are just that -- arguments -- and not a convincing proof.
File in pdf
Published in Advances in Artificial Intelligence and Machine Learning, 2021, Vol. 1, No. 1, pp. 12-25.
As a result of applying fuzzy rules, we get a fuzzy set describing possible control values. In automatic control systems, we need to defuzzify this fuzzy set, i.e., to transform it to a single control value. One of the most frequently used defuzzification techniques is centroid defuzzification. From the practical viewpoint, an important question is: how accurate is the resulting control recommendation? The more accurately we need to implement the control, the more expensive the resulting controller.
The possibility to gauge the accuracy of the fuzzy control recommendation follows from the fact that, from the mathematical viewpoint, centroid defuzzification is equivalent to transforming the fuzzy set into a probability distribution and computing the mean value of control. In view of this interpretation, a natural measure of accuracy of a fuzzy control recommendation is the standard deviation of the corresponding random variable.
Computing this standard deviation is straightforward for the traditional [0,1]-based fuzzy logic, in which all experts' degrees of confidence are represented by numbers from the interval [0,1]. In practice, however, an expert usually cannot describe his/her degree of confidence by a single number; a more appropriate way to describe his/her confidence is to allow him/her to mark an interval of possible degrees. In this paper, we provide an efficient algorithm for estimating the accuracy of fuzzy control recommendations under such interval-valued fuzzy uncertainty.
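A minimal Python/NumPy sketch of the probabilistic reading of centroid defuzzification for an ordinary [0,1]-valued membership function on a grid (the interval-valued algorithm of the paper is not reproduced here): normalize the membership function into a probability distribution, take its mean as the defuzzified control, and use its standard deviation as the accuracy measure.

```python
import numpy as np

# Sketch of the probabilistic reading of centroid defuzzification for a
# [0,1]-valued membership function mu(x) on a grid: normalize mu into a
# probability distribution, take the mean as the defuzzified control, and
# use the standard deviation as a measure of the recommendation's accuracy.

x = np.linspace(0.0, 10.0, 1001)          # possible control values
mu = np.exp(-((x - 6.0) ** 2) / 2.0)      # illustrative membership function

p = mu / mu.sum()                         # induced probability weights
mean = (x * p).sum()                      # centroid = defuzzified control value
std = np.sqrt(((x - mean) ** 2 * p).sum())  # accuracy of the recommendation
print(mean, std)
```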
Original file UTEP-CS-21-48 in pdf
Updated version UTEP-CS-21-48a in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 69-73.
To simplify the design of compilers, Noam Chomsky proposed to first transform a description of a programming language -- which is usually given in the form of a context-free grammar -- into a simplified "normal" form. A natural question is: why this specific normal form? In this paper, we provide an answer to this question.
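As a reminder of what the normal form requires, here is a small Python sketch (purely illustrative) that checks whether a grammar is in Chomsky normal form: every rule must be of the form A -> BC for nonterminals B and C, or A -> a for a terminal a, with S -> epsilon allowed for the start symbol.

```python
# Illustrative sketch: a grammar is in Chomsky normal form if every rule is
# either A -> B C (two nonterminals) or A -> a (a single terminal); we also
# allow S -> epsilon for the start symbol. Rules are given as (head, body) pairs.

def is_cnf(rules, nonterminals, start):
    for head, body in rules:
        if body == () and head == start:      # S -> epsilon
            continue
        if len(body) == 1 and body[0] not in nonterminals:
            continue                          # A -> a
        if len(body) == 2 and all(s in nonterminals for s in body):
            continue                          # A -> B C
        return False
    return True

rules = [("S", ("A", "B")), ("A", ("a",)), ("B", ("b",))]
print(is_cnf(rules, nonterminals={"S", "A", "B"}, start="S"))  # True
```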
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 159-163.
According to the general idea of quantization, all physical dependencies are only approximately deterministic, and all physical "constants" are actually varying. A natural conclusion -- that some physicists made -- is that Planck's constant (that determines the magnitude of quantum effects) can also vary. In this paper, we use another general physics idea -- the second law of thermodynamics -- to conclude that with time, this constant can only decrease. Thus, with time (we are talking cosmological scales, of course), our world is becoming less quantum.
File in pdf
Published in Asian Journal of Economics and Banking (AJEB), 2021.
Purpose: It is well known that micromanagement -- excessive control of employees -- is detrimental to the employees' morale and thus, decreases their productivity. But what if the managers keep people happy -- will there still be negative consequences of micromanagement? This is the problem analyzed in this paper.
Design/methodology/approach: To analyze our problem, we use general -- but simplified -- mathematical models of how productivity depends on the working rate.
Findings: We show that even in the absence of psychological discomfort, micromanagement is still detrimental to productivity. Interestingly, the negative effect of micromanagement increases as the population becomes more diverse.
Originality/value: This is the first paper in which the purely economic consequences of micromanagement -- separate from its psychological consequences -- are studied in precise mathematical terms, and the first paper that analyzes the relation between these consequences and diversity of the population.
File in pdf
Purpose: While the main purpose of reporting -- e.g., reporting for taxes -- is to gauge the economic state of a company, the fact that reporting is done at pre-determined dates distorts the reporting results. For example, to create an inflated impression of their productivity, companies fire temporary workers before the reporting date and re-hire them right away. The purpose of this study is to decide how to avoid such distortions.
Design/methodology/approach: We want to make our solution applicable to all possible reasonable optimality criteria. Thus, we use a general formalism for describing and analyzing all such criteria.
Findings: We show that most distortion problems will disappear if we replace the fixed pre-determined reporting dates with individualized random reporting dates. We also show that for all reasonable optimality criteria, the optimal way to assign reporting dates is to assign them uniformly.
Originality/value: We propose a new idea of replacing the fixed pre-determined reporting dates with randomized ones. On the informal level, this idea may have been proposed earlier, but what is completely new is our analysis of which probability distribution for reporting dates is the best for the economy: it turns out that under all reasonable optimality criteria, the uniform distribution works best.
Original file UTEP-CS-21-44 in pdf
Updated version UTEP-CS-21-44b in pdf
Updated version UTEP-CS-21-44dV2 in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2022, pp. 121-124.
The main idea behind semi-supervised learning is that when we do not have enough human-generated labels, we train a machine learning system based on what we have, and we add the resulting labels (called pseudo-labels) to the training sample. Interestingly, this idea works well, but why it works is somewhat of a mystery: we did not add any new information, so why does this work? There exist explanations for this empirical phenomenon, but most of these explanations are based on complicated math. In this paper, we provide a simple intuitive explanation.
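As an illustration of the general idea (not the paper's specific setup), here is a short Python sketch of pseudo-labeling: a model is trained on the small labeled sample, its confident predictions on unlabeled data are added as pseudo-labels, and the model is retrained. The classifier and the confidence threshold are arbitrary choices made only for this sketch.

import numpy as np
from sklearn.linear_model import LogisticRegression

def pseudo_label(X_lab, y_lab, X_unlab, threshold=0.9):
    model = LogisticRegression().fit(X_lab, y_lab)      # train on labeled data only
    proba = model.predict_proba(X_unlab)
    confident = proba.max(axis=1) >= threshold          # keep only confident pseudo-labels
    X_new = np.vstack([X_lab, X_unlab[confident]])
    y_new = np.concatenate([y_lab, proba[confident].argmax(axis=1)])
    return LogisticRegression().fit(X_new, y_new)       # retrain on the enlarged sample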
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 43-47.
What is 1/0? Students are first taught -- in elementary school -- that it is undefined, and then -- in calculus -- that it is infinity. In both cases, the answer is usually provided based on abstract reasoning. But what about the practical meaning? In this paper, we show that, depending on the specific practical problem, we can have different answers to this question: in some practical problems, the correct answer is that 1/0 is undefined, in others, the correct answer is that 1/0 = 0 -- and there are probably other practical problems where we can have yet other answers. Bottom line: there is no universal answer; the correct answer depends on what practical problem we are considering.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2022, pp. 111-115.
Since in the physical world, most dependencies are smooth (differentiable), traditionally, smooth functions were used to approximate these dependencies. In particular, neural networks used smooth activation functions such as the sigmoid function. However, the successes of deep learning showed that in many cases, non-smooth activation functions like max(0,z) work much better. In this paper, we explain why non-smooth approximating functions often work better -- even when the approximated dependence is smooth.
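For concreteness, here are the two activation functions contrasted above, written out in a short Python sketch (standard definitions, included only for readers unfamiliar with them):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))   # smooth activation function

def relu(z):
    return np.maximum(0.0, z)         # non-smooth activation function max(0, z)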
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 201-205.
Why do we use the decimal system to describe numbers? Why do communities with more than 150 people -- all over the world -- tend to split? In this paper, we show that both phenomena -- as well as some other phenomena -- can be explained if we take into account the seven plus or minus two law, according to which a person can keep from 5 to 9 items in immediate memory.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2022, pp. 39-41.
The fact that oo is actively used as a symbol for infinity suggests that this symbol is probably a reasonable choice for this role -- but why? In this paper, we provide a possible explanation for why it is indeed a reasonable symbol for infinity.
File in pdf
Published in Entropy, 2021, Vol. 23, Paper 767.
One of the most effective image processing techniques is the use of convolutional neural networks that use convolutional layers. In each such layer, the value of the output at each point is a combination of input data corresponding to several neighboring points. To improve the accuracy, researchers have developed a version of this technique, in which only data from some of the neighboring points is processed. It turns out that the most efficient case -- called dilated convolution -- is when we select the neighboring points whose differences in both coordinates are divisible by some constant l. In this paper, we explain this empirical efficiency by proving that for all reasonable optimality criteria, dilated convolution is indeed better than possible alternatives.
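To make the construction concrete, here is a small Python sketch (ours, purely illustrative) of a single output value of a 2-D dilated convolution: only neighbors whose coordinate offsets are multiples of the dilation constant l contribute.

import numpy as np

def dilated_conv_at(image, kernel, i, j, l):
    # kernel is assumed square with odd size; (i, j) must be at least
    # (kernel_size // 2) * l away from the image border
    k = kernel.shape[0] // 2
    total = 0.0
    for di in range(-k, k + 1):
        for dj in range(-k, k + 1):
            # only points whose offsets are divisible by l are used
            total += kernel[di + k, dj + k] * image[i + di * l, j + dj * l]
    return total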
Original file UTEP-CS-21-38 in pdf
Updated version UTEP-CS-21-38a in pdf
Updated version UTEP-CS-21-38b in pdf
Updated version UTEP-CS-21-38c in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 207-210.
The Nobel-prize-winning physicist Lev Landau was known for applying mathematical and physical reasoning to human relations. His advice may have been somewhat controversial, but it was usually well motivated. However, there was one piece of advice for which no explanation remains -- that a person should not marry his/her first and second true loves, and should only start thinking about marriage with the third true love. In this paper, we provide a possible Landau-style motivation for this advice.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 13-16.
Sigmund Freud famously placed what he called the Oedipus complex at the center of his explanation of psychological and psychiatric problems. Freud's analysis was based on anecdotal evidence and intuition, not on solid experiments -- as a result, for a long time, many psychologists dismissed the universality of Freud's findings. However, lately, experiments seem to confirm that indeed men, on average, select wives who resemble their mothers, and women select husbands who resemble their fathers. In this paper, we provide a possible biological explanation for this observed phenomenon.
File in pdf
Published in Proceedings of the 4th International Conference on Uncertainty Quantification in Computational Sciences and Engineering UNCECOMP'2021, Athens, Greece, June 28-30, 2021, pp. 26-34.
In many practical situations, the only information that we know about the measurement error is the upper bound D on its absolute value. In this case, once we know the measurement result X, the only information that we have about the actual value x of the corresponding quantity is that this value belongs to the interval [X − D, X + D]. How can we estimate the accuracy of the result of data processing under this interval uncertainty? In general, computing this accuracy is NP-hard, but in the usual case when measurement errors are relatively small, we can linearize the problem and thus, make computations feasible. This problem is well studied when data processing results in a single value y, but usually, we use the same measurement results to compute the values of several quantities y1, ..., yn. What is the resulting set of tuples (y1, ..., yn)? In this paper, we show that this set is a particular case of what is called a zonotope, and that we can use known results about zonotopes to make the corresponding computational problems easier to solve.
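As a rough illustration of the zonotope description (our own sketch, with hypothetical names): after linearization, the set of possible tuples (y1, ..., yn) is {y0 + J e : |e_i| <= D_i}, where y0 is the tuple computed from the measured values, J is the Jacobian of the data processing algorithm, and D_i are the error bounds. The code below simply samples points of this set.

import numpy as np

def zonotope_samples(y0, J, D, samples=10000):
    # y0: computed results (length n), J: Jacobian (n x m), D: error bounds (length m)
    e = (2 * np.random.rand(samples, len(D)) - 1) * np.asarray(D)   # errors in [-D_i, D_i]
    return np.asarray(y0) + e @ np.asarray(J).T                     # points of the zonotope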
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 141-152.
In many real-life situations, the only information that we have about some quantity S is a lower bound T ≤ S. In such a situation, what is a reasonable estimate for S? For example, we know that a company has survived for T years, and based on this information, we want to predict for how long it will continue surviving. At first glance, this is the type of problem to which we can apply the usual fuzzy methodology -- but unfortunately, a straightforward use of this methodology leads to a counter-intuitive infinite estimate for S. There is an empirical formula for such estimation -- known as the Lindy Effect and first proposed by Benoit Mandelbrot -- according to which the appropriate estimate for S is proportional to T: S = c*T, where, with some confidence, the constant c is equal to 1. In this paper, we show that a deeper analysis of the situation enables fuzzy methodology to lead to a finite estimate for S -- moreover, to an estimate which is in perfect accordance with the empirical Lindy Effect. Interestingly, a similar idea can help in physics, where, in some problems, straightforward computations also lead to physically meaningless infinite values.
Original file UTEP-CS-21-34 in pdf
Updated version UTEP-CS-21-34a in pdf
Published in Entropy, 2021, Vol. 23, No. 5, Paper 501.
As a system becomes more complex, at first, its description and analysis become more complicated. However, a further increase in the system's complexity often makes this analysis simpler. A classical example is the Central Limit Theorem: when we have a few independent sources of uncertainty, the resulting uncertainty is very difficult to describe, but as the number of such sources increases, the resulting distribution gets close to an easy-to-analyze normal one -- and indeed, normal distributions are ubiquitous. We show that such limit theorems often make the analysis of complex systems easier -- i.e., lead to the blessing-of-dimensionality phenomenon -- for all the aspects of these systems: the corresponding transformation, the system's uncertainty, and the desired result of the system's analysis.
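A quick numerical illustration of the limit-theorem effect mentioned above (a toy example, not from the paper): the excess kurtosis of a sum of independent uniform sources approaches 0, the value characteristic of a normal distribution, as the number of sources grows.

import numpy as np
from scipy.stats import kurtosis

rng = np.random.default_rng(0)
few = rng.uniform(-1, 1, size=(100000, 2)).sum(axis=1)    # 2 sources: visibly non-normal
many = rng.uniform(-1, 1, size=(100000, 50)).sum(axis=1)  # 50 sources: close to normal
print(kurtosis(few), kurtosis(many))                      # excess kurtosis tends to 0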
Original file UTEP-CS-21-33 in pdf
Updated version UTEP-CS-21-33a in pdf
Published in Proceedings of the 14th International Workshop on Constraint Programming and Decision Making CoProd'2021, Szeged, Hungary, September 12, 2021; extended version published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 3-9.
According to the analysis by the French philosopher Jean-Paul Sartre, the famous French poet and essayist Charles Baudelaire described (and followed) two main -- somewhat unusual -- ideas about art: that art should be vague, and that to create an object of art, one needs to aim for uniqueness. In this paper, we provide an algorithm-based explanation for these seemingly counter-intuitive ideas, explanation related to Kolmogorov complexity-based formalization of Garrett Birkhoff's theory of beauty.
File in pdf
Published in Proceedings of the 14th International Workshop on Constraint Programming and Decision Making CoProd'2021, Szeged, Hungary, September 12, 2021; extended version published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 75-80.
Instructors from the English department praise our students when they use the most sophisticated grammatical constructions and the most appropriate (often rarely used) words -- as long as this helps better convey all the subtleties of the meaning. On the other hand, we usually teach the students to use the most primitive Basic English when writing research papers -- this way, the resulting paper will be most accessible to the international audience. Who is right? In this paper, we analyze this question by using a natural model -- inspired by Zipf's law -- and we conclude that to achieve the largest possible effect, the paper should be written at an intermediate level -- not too primitive, not too sophisticated (actually, at the middle-school level).
File in pdf
Published in Proceedings of the 14th International Workshop on Constraint Programming and Decision Making CoProd'2021, Szeged, Hungary, September 12, 2021; extended version published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 297-304.
While we currently only observe 3 spatial dimensions, according to modern physics, our space is actually at least 10-dimensional. In this paper, for different versions of multi-D spatial models, we analyze how the existence of the additional spatial dimensions can help computations. It turns out that in all the versions, there is some speed-up -- moderate when the extra dimensions are actually compactified, and drastic if the extra dimensions are separated by a potential barrier.
File in pdf
Published in Proceedings of the 14th International Workshop on Constraint Programming and Decision Making CoProd'2021, Szeged, Hungary, September 12, 2021; extended version published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 153-157.
According to modern physics, space-time originally was of dimension 11 or higher, but then the additional dimensions became compactified, i.e., their size in these directions remains small and thus not observable. As a result, at present, we only observe 4 dimensions of space-time. There are mechanisms that explain how compactification may have occurred, but the remaining question is why it occurred. In this paper, we provide two first-principles-based explanations for space-time compactification: one based on the Second Law of Thermodynamics and one based on geometry and symmetries.
File in pdf
Published in Proceedings of the 14th International Workshop on Constraint Programming and Decision Making CoProd'2021, Szeged, Hungary, September 12, 2021; extended version published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 63-67.
Historically, to describe numbers, some cultures used bases much larger than our usual base 10, namely, bases 20, 40, and 60. There are explanations for base 60, there is some explanation for base 20, but base 40 -- used in medieval Russia -- remains largely a mystery. In this paper, we provide a possible explanation for all these three bases, an explanation based on the natural need to manage large groups of people. We also speculate why different cultures used different bases.
File in pdf
Published in Joint Proceedings of the 19th World Congress of the International Fuzzy Systems Association (IFSA), the 12th Conference of the European Society for Fuzzy Logic and Technology (EUSFLAT), and the 11th International Summer School on Aggregation Operators (AGOP), Bratislava, Slovakia, September 19-24, 2021, Atlantis Press, Dordrecht, the Netherlands, pp. 282-289.
In many practical situations, users describe their preferences in imprecise (fuzzy) terms. In such situations, fuzzy techniques are a natural way to describe these preferences in precise terms.
Of course, this description is only an approximation to the ideal decision making that a person would perform if we took the time to elicit his/her exact preferences. How accurate is this approximation? When can fuzzy decision making -- potentially -- describe the exact decision making, and when is there a limit to the accuracy of fuzzy approximations?
In this paper, we show that decision making can be precisely described in fuzzy terms if and only if the different numerical characteristics describing the alternatives are independent -- in the sense that if, for two alternatives, all but one characteristic have the same value, then the preference between these two alternatives depends only on the differing characteristic and does not depend on the values of all the other characteristics.
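One possible way to write down this independence condition (our notation, not necessarily the paper's): for alternatives $a = (a_1, \ldots, a_n)$ and $b = (b_1, \ldots, b_n)$,
$$\bigl(a_i = b_i \text{ for all } i \ne k\bigr) \;\Longrightarrow\; \bigl(a \succeq b \Leftrightarrow a_k \succeq_k b_k\bigr),$$
where $\succeq$ is the preference between alternatives and $\succeq_k$ is a relation that depends only on the $k$-th characteristic.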
Original file UTEP-CS-21-27 in pdf
Updated version UTEP-CS-21-27a in pdf
Published in Joint Proceedings of the 19th World Congress of the International Fuzzy Systems Association (IFSA), the 12th Conference of the European Society for Fuzzy Logic and Technology (EUSFLAT), and the 11th International Summer School on Aggregation Operators (AGOP), Bratislava, Slovakia, September 19-24, 2021, Atlantis Press, Dordrecht, the Netherlands, pp. 478-485.
A recent book provides examples showing that a new class of probability distributions and membership functions -- called kappa-regression distributions and membership functions -- leads to better data processing results than previously known classes. In this paper, we provide a theoretical explanation for this empirical success -- namely, we show that these distributions are the only ones that satisfy reasonable invariance requirements.
Original file UTEP-CS-21-26 in pdf
Updated version UTEP-CS-21-26a in pdf
Published in Applied Mathematical Sciences, 2021, Vol. 15, No. 4, pp. 159-163.
To design and maintain pavements, it is important to know how fast water will penetrate the underlying soil. The speed of this penetration is determined by a quantity called permeability. There are several seemingly very different empirical and semi-empirical formulas that predict permeability. A recent attempt to select the formula that best fits the experimental data ended up with the unexpected conclusion that all three formulas provide a good fit for the data. But these formulas are very different -- how come all three of them fit the same data? In this paper, we explain this somewhat paradoxical result.
File in pdf
Published in Applied Mathematical Sciences, 2021, Vol. 15, No. 4, pp. 153-158.
When designing a road, it is important to know how many voids are in the underlying soil -- since these voids will affect the road stiffness. It is difficult to measure the voids ratio directly, so instead, we need to estimate it based on easier-to-measure characteristics such as grain size. There are empirical formulas for such estimation. In this paper, we provide a possible theoretical explanation for these empirical formulas.
File in pdf
We present a corpus of spoken dialogs collected to support research in the automatic detection of times of dissatisfaction. We collected 191 mock customer-merchant dialogs in two conditions: one where the scripts guided the participants to a satisfactory, mutually agreeable outcome, and one where agreement was precluded. Most dialogs were 1 to 5 minutes in length. The corpus and metadata are freely available for research purposes.
File in pdf
Published in International Mathematical Forum, 2021, Vol. 16, No. 2, pp. 95-99.
Most of us are familiar with Roman numerals and with the standard way of describing numbers in the form of these numerals. What many people do not realize is that the actual ancient Romans often deviated from these rules. For example, instead of always writing the number 8 as VIII, i.e., 5 + 3, they sometimes wrote it as IIX, i.e., as 10 − 2. Some of such differences can be explained: e.g., the unusual way of writing 98 as IIC, i.e., as 100 − 2, can be explained by the fact that the Latin word for 98 literally means "two from hundred". However, other differences are not easy to explain -- e.g., why Romans sometimes wrote 8 as VIII and sometimes as IIX. In this paper, we provide a possible explanation for this variation.
File in pdf
Published in Julia Rayz, Victor Raskin, Scott Dick, and Vladik Kreinovich (eds.), Explainable AI and Other Applications of Fuzzy Techniques, Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2021, West Lafayette, Indiana, June 7-9, 2021, Springer, Cham, Switzerland, 2022, pp. 453-460.
Traditional algorithms for addition and multiplication -- that we all study at school -- start with the lowest possible digits. Interestingly, many people in Mexico use a different algorithm, in which operations start with the highest digits. We show that in many situations, this alternative algorithm is indeed more efficient -- especially in typical practical situations when we know the values -- that we need to add or subtract -- only with uncertainty.
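Since the abstract does not spell out the algorithm, here is only a hypothetical Python sketch of highest-digit-first addition, with carries corrected in a second pass; the algorithm actually used in Mexico, and analyzed in the paper, may differ in its details.

def add_left_to_right(a_digits, b_digits):
    # a_digits, b_digits: equal-length lists of decimal digits, most significant first
    result = [da + db for da, db in zip(a_digits, b_digits)]   # add columns left to right
    for i in range(len(result) - 1, 0, -1):                    # then fix carries
        if result[i] >= 10:
            result[i] -= 10
            result[i - 1] += 1
    if result and result[0] >= 10:
        result[0] -= 10
        result = [1] + result
    return result

print(add_left_to_right([4, 7, 8], [3, 6, 5]))   # 478 + 365 = 843 -> [8, 4, 3]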
Original file UTEP-CS-21-21 in pdf
Updated version UTEP-CS-21-21a in pdf
Published in Julia Rayz, Victor Raskin, Scott Dick, and Vladik Kreinovich (eds.), Explainable AI and Other Applications of Fuzzy Techniques, Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2021, West Lafayette, Indiana, June 7-9, 2021, Springer, Cham, Switzerland, 2022, pp. 74-78.
One of the big challenges of many state-of-the-art AI techniques such as deep learning is that their results do not come with any explanations -- and, taking into account that some of the resulting conclusions and recommendations are far from optimal, it is difficult to distinguish good advice from bad. It is therefore desirable to come up with explainable AI. In this paper, we argue that fuzzy techniques are a proper way to achieve this explainability, and we also analyze which fuzzy techniques are most appropriate for this purpose. Interestingly, it turns out that the answer depends on what problem we are solving: e.g., different "and"- and "or"-operations are preferable when we are controlling a single object and when we are controlling a group of objects.
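For readers unfamiliar with the operations mentioned above, here are two standard pairs of "and"- and "or"-operations (general background in a short Python sketch, not the paper's specific recommendation):

and_min = lambda a, b: min(a, b)        # Zadeh's "and" (minimum t-norm)
or_max = lambda a, b: max(a, b)         # Zadeh's "or" (maximum t-conorm)
and_prod = lambda a, b: a * b           # product t-norm
or_psum = lambda a, b: a + b - a * b    # probabilistic-sum t-conorm
print(and_min(0.7, 0.4), and_prod(0.7, 0.4), or_max(0.7, 0.4), or_psum(0.7, 0.4))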
Original file UTEP-CS-21-20 in pdf
Updated version UTEP-CS-21-20a in pdf
Published in Julia Rayz, Victor Raskin, Scott Dick, and Vladik Kreinovich (eds.), Explainable AI and Other Applications of Fuzzy Techniques, Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2021, West Lafayette, Indiana, June 7-9, 2021, Springer, Cham, Switzerland, 2022, pp. 400-405.
Decades ago, machine learning was not as good as human learning, so many machine learning techniques were borrowed from how we humans learn -- be it on the level of concepts or on the level of biological neurons, cells responsible for mental activities such as learning. Lately, however, machine learning techniques such as deep learning have started outperforming humans. It is therefore time to start borrowing the other way around, i.e., using machine learning experience to improve our human teaching and learning. In this paper, we describe several relevant ideas -- and explain how some of these ideas are related to fuzzy logic and fuzzy techniques.
Original file UTEP-CS-21-19 in pdf
Updated version UTEP-CS-21-19a in pdf
To appear in Proceedings of the 3rd International Conference on Intelligent and Fuzzy Systems INFUS'2021, Izmir, Turkey, August 24-26, 2021.
In the traditional fuzzy logic, we can use "and"-operations (also known as t-norms) to estimate the expert's degree of confidence in a composite statement A&B based on his/her degrees of confidence d(A) and d(B) in the corresponding basic statements A and B. But what if we want to estimate the degree of confidence in A&B&C in situations when, in addition to the degrees of confidence d(A), d(B), and d(C) in the basic statements, we also know the expert's degrees of confidence d(A&B), d(A&C), and d(B&C) in the pairs? Traditional "and"-operations can provide such an estimate -- but only by ignoring some of the available information. In this paper, we show that, by going beyond the traditional "and"-operations, we can find a natural estimate that takes all the available information into account -- and thus, hopefully, leads to a more accurate estimate.
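To make the point about ignored information concrete, here is a tiny Python illustration of the traditional baseline (this is not the paper's proposed estimate): a usual "and"-operation can only combine two degrees at a time, so some of the known pairwise degrees are necessarily left out.

def traditional_estimate(dA, dB, dC, dAB, dAC, dBC):
    # e.g., combine the known degree d(A&B) with d(C) via the min t-norm,
    # thereby ignoring the available values d(A&C) and d(B&C)
    return min(dAB, dC)

print(traditional_estimate(0.9, 0.8, 0.7, dAB=0.75, dAC=0.65, dBC=0.6))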
Original file UTEP-CS-21-18 in pdf
Updated version UTEP-CS-21-18a in pdf
Published in Journal of Pure and Applied Mathematics, 2021, Vol. 12, No. 1, pp. 41-53.
At first glance, Zadeh's ideas that everything is a matter of degree seem to be more appropriate for situations when we do not know the exact equations, when we only have expert rules for control and/or decision making. From this viewpoint, it may seem that in physics, where equations are ubiquitous and all the terms seem precise, there is not much place for fuzziness. But, as we show, in reality, fuzzy ideas can help -- and help dramatically -- in physics as well: in spite of the first impression, as physicists know well, many arguments in physics rely heavily on physical intuition, on imprecise terms like randomness, and a natural formalization of these terms makes them a matter of degree.
Original file UTEP-CS-21-17 in pdf
Updated version UTEP-CS-21-17a in pdf
Published in Mathematical Structures and Modeling, 2021, Vol. 60, pp. 44-49.
In many physical theories, there is a -- somewhat surprising -- similarity between events corresponding to large distances R and events corresponding to very small distances 1/R. Such similarity is known as T-duality. At present, the only available explanation for T-duality comes from a complex mathematical analysis of the corresponding formulas. In this paper, we provide an alternative explanation based on the fundamental notion of causality.
File in pdf
Published in Applied Mathematical Sciences, 2021, Vol. 15, No. 3, pp. 131-136.
It is known that the most effective protection from Covid-19 comes if the vaccination is done in two doses separated by several weeks. In this paper, we provide a qualitative explanation for this empirical fact.
File in pdf
Published in Julia Rayz, Victor Raskin, Scott Dick, and Vladik Kreinovich (eds.), Explainable AI and Other Applications of Fuzzy Techniques, Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2021, West Lafayette, Indiana, June 7-9, 2021, Springer, Cham, Switzerland, 2022, pp. 499-504.
Many phenomena like burnout are gauged by computing a linear combination of user-provided Likert-scale values. The problem with this traditional approach is that, while it makes sense to take a linear combination of weights or other physical characteristics, a linear combination of Likert-scale values like "good" and "satisfactory" does not make sense. The only reason why linear combinations are used in practice is that the corresponding data processing tools are readily available. A more adequate approach would be to use fuzzy logic -- a technique specifically designed to deal with Likert-scale values. We show that fuzzy logic actually leads to a linear combination -- not of the original degrees, but of their transformed values. The corresponding transformation function -- as well as the coefficients of the corresponding linear combination -- must be determined from the condition that the resulting expression best fits the available data.
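One way to write down the resulting model in formulas (our notation, which may differ from the paper's):
$$\text{score} \approx w_0 + \sum_{i=1}^{n} w_i\, g(d_i),$$
where the $d_i$ are the reported Likert-scale degrees, $g$ is the transformation function, and both $g$ and the coefficients $w_i$ are chosen so that the expression best fits the available data.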
Original file UTEP-CS-21-14 in pdf
Updated version UTEP-CS-21-14a in pdf
Published in International Mathematical Forum, 2021, Vol. 16, No. 2, pp. 57-61.
In the 13th-15th centuries, many European monasteries used an unusual number system developed originally by the Cistercian monks; later on, this system was used by winemakers. In this paper, we provide a possible explanation of why these particular symbols were used.
File in pdf
Published in Applied Mathematical Sciences, 2021, Vol. 15, No. 3, pp. 113-118.
To divide two numbers a and b, modern computers use an algorithm which is more efficient than what we humans normally do: they compute a*(1/b), where for all sufficiently small integers b, the inverse 1/b is pre-computed. For fractions, when both a and b are integers, this algorithm requires only one multiplication. Can we make the procedure even faster by not using multiplication at all? To do this, we need to represent each fraction as a sum of inverses -- which, interestingly, is how the ancient Egyptians represented fractions.
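One classical way to obtain such a sum-of-inverses representation is the greedy (Fibonacci-Sylvester) method; the Python sketch below is included only as an illustration and is not necessarily the construction used in the paper.

from fractions import Fraction

def egyptian(a, b):
    # represent a/b (with 0 < a < b) as a sum of unit fractions 1/n1 + 1/n2 + ...
    x = Fraction(a, b)
    terms = []
    while x > 0:
        n = -(-x.denominator // x.numerator)   # smallest n with 1/n <= x, i.e., ceil(1/x)
        terms.append(n)
        x -= Fraction(1, n)
    return terms

print(egyptian(4, 13))   # [4, 18, 468]: 4/13 = 1/4 + 1/18 + 1/468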
File in pdf
Published in Applied Mathematical Sciences, 2021, Vol. 15, No. 2, pp. 95-99.
In many practical situations, we need to make a binary decision based on the available data: whether an incoming email is spam or not, whether to give a bank loan to a company, etc. In many such situations, we can (and do) use machine learning to come up with such a decision. The problem is that while the results of a machine learning model are not 100% reliable, the existing machine learning algorithms do not allow us to decide how reliable each result is. In this paper, for simple examples, we provide a technique for gauging this reliability.
File in pdf
Published in Julia Rayz, Victor Raskin, Scott Dick, and Vladik Kreinovich (eds.), Explainable AI and Other Applications of Fuzzy Techniques, Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2021, West Lafayette, Indiana, June 7-9, 2021, Springer, Cham, Switzerland, 2022, pp. 196-202.
In principle, one can have a continuous functional dependence y = f(x1, ..., xn) for which, for each proper subset of the n+1 variables x1, ..., xn, y, there is no relation: i.e., for each selection of n variables out of these n+1, all combinations of their values are possible. However, for fuzzy operations, there is always some non-trivial relation between y and one of the inputs xi; for example, for "and"-operations (t-norms) y = f(x1, x2), we have y ≤ x1; for "or"-operations (t-conorms) y = f(x1, x2), we have x1 ≤ y, etc. In this paper, we prove a general mathematical result that explains this empirical fact.
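A quick numerical check of the property stated above for two standard "and"-operations (a sanity check in Python, not part of the paper's proof):

import itertools

t_norms = {"min": min, "product": lambda a, b: a * b}
grid = [i / 10 for i in range(11)]
for name, t in t_norms.items():
    # the output of a t-norm never exceeds either of its inputs
    assert all(t(a, b) <= a and t(a, b) <= b
               for a, b in itertools.product(grid, repeat=2))
print("y <= x1 and y <= x2 hold for the min and product t-norms on the grid")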
Original file UTEP-CS-21-10 in pdf
Updated version UTEP-CS-21-10b in pdf
Published in International Mathematical Forum, 2021, Vol. 16, No. 1, pp. 29-33.
In one of the Biblical stories, the prophet Balaam blesses the tents of Israel for being good. But what can be so good about the tents? The traditional Rabbinical interpretation is that the placement of the tents provided full privacy. In our previous paper, we considered the consequences of visual privacy: from each entrance, one cannot see what is happening at any other entrance. In this paper, we analyze the possible consequences of audio privacy: from each tent, one cannot hear what is going on in the other tents.
File in pdf
Published in Proceedings of the 10th International Scientific-Practical Conference "Mathematical Education in Schools and Universities" MATHEDU'2021, Kazan, Russia, March 22-28, 2021, pp. 105-110.
Everyone -- instructors and students -- wants to make sure that the grading of each test is fair, that the only thing that determines a student's grade is their level of knowledge, and that different students get the same penalty for the same mistake, irrespective of their gender, of their past grades, of their behavior in the class, of how many classes they missed, etc. How can instructors achieve this goal? How can we make sure that students are convinced that the grading was indeed fair? In this paper, we describe possible measures: anonymous submissions, formulating (and posting for all the students to see) an exact grading algorithm, and posting anonymized versions of all the solutions submitted by all the students. To implement these measures, it is necessary to have a centralized computer system that will generate random numbers or random emails for students to submit their tests -- but such a system is reasonably easy to design.
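As a small illustration of the kind of centralized system mentioned above (a hypothetical Python sketch; the names and details are ours, not the paper's), here is how random anonymous submission IDs could be generated, with only the system keeping the mapping back to students:

import secrets

def assign_anonymous_ids(students):
    mapping = {}
    for s in students:
        sid = secrets.token_hex(4)   # e.g. 'a3f29c1b'; a collision is unlikely but could be re-drawn
        mapping[sid] = s
    return mapping

print(assign_anonymous_ids(["student1@example.edu", "student2@example.edu"]))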
File in pdf
Published in Asian Journal of Economics and Banking (AJEB), 2021.
Purpose: The purpose of the study is to analyze when -- while predicting the future price of a financial instrument -- we should stop computations and start using this information for the actual investment.
Design/methodology/approach: We derive the explicit formulas explaining how the resulting gain depends on the duration of computations.
Findings: We provide an algorithm that enables us to select the computation time that leads to the largest possible gain.
Originality/value: To the best of our knowledge, this is the first solution to the problem. Following our recommendations will allow investors to select the computation time for which the resulting gain is the largest possible.
Keywords: Investment; Optimal investment portfolio; Computation time.
File in pdf
Published in Proceedings of the 14th International Workshop on Constraint Programming and Decision Making CoProd'2021, Szeged, Hungary, September 12, 2021; extended version published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Uncertainty and Constraints: A Why-Book, Springer, Cham, Switzerland, 2023, pp. 89-92.
A recent experiment has shown that, out of all possible biological membranes, homogeneous ones proved to be the most efficient for water desalination. In this paper, we show that natural symmetry ideas lead to a theoretical explanation for this empirical fact.
File in pdf
Published in Julia Rayz, Victor Raskin, Scott Dick, and Vladik Kreinovich (eds.), Explainable AI and Other Applications of Fuzzy Techniques, Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2021, West Lafayette, Indiana, June 7-9, 2021, Springer, Cham, Switzerland, 2022, pp. 52-61.
In real life, we often need to make a decision, i.e., we need to select one of the possible alternatives. In many practical situations, our objective is financial: we need to select an alternative that will bring us the largest financial gain. The problem is that usually, we only know the gain corresponding to each alternative with some uncertainty: instead of the exact numerical value of this gain, there is a whole set of possible values of this gain. How can we make decisions under such interval-valued uncertainty? An answer to this question is known for the case when these sets are closed. In this paper, we extend this result to the general case, when the sets are not necessarily closed.
Original file UTEP-CS-21-05 in pdf
Updated version UTEP-CS-21-05b in pdf
Published in: Nguyen Ngoc Thach, Vladik Kreinovich, Doan Thanh Ha, and Nguyen Duc Trung (eds.), Financial Econometrics: Bayesian Analysis, Quantum Uncertainty, and Related Topics, Springer, Cham, Switzerland, 2022, pp. 105-109.
In many practical situations, we know the lower and upper bounds L and U on possible values of a quantity x. In such situations, the probability distribution of this quantity is also located on the corresponding interval [L, U]. In many such cases, the empirical probability distribution has the form
File in pdf
Published in International Mathematical Forum, 2021, Vol. 16, No. 1, pp. 23-27.
Once we have partial knowledge, what next question do we usually pursue? Empirical studies show, e.g., that if we know that A \/ B is true, but we do not know whether A is true or B is true, then the usual next step is to ask whether A is true or B is true. This selection of the next step is in line with the constructive approach to knowledge, in which, when A \/ B is true, this means that we either know that A is true, or we know that B is true. In this paper, we provide a possible explanation for this empirical selection of the next question to ask.
File in pdf
Published in Journal of Combinatorics, Information, and System Sciences JCISS, 2020, Vol. 45, No. 1-4, pp. 1-10.
In a usual Numerical Methods class, students learn that gradient descent is not an efficient optimization algorithm, and that more efficient algorithms exist -- algorithms which are actually used in state-of-the-art numerical optimization packages. On the other hand, in solving optimization problems related to machine learning -- and, in particular, in the currently most efficient deep learning -- gradient descent (in the form of backpropagation) is much more efficient than any of the alternatives that have been tried. How can we reconcile these two statements? In this paper, we explain that, in reality, there is no contradiction here. Namely, in usual applications of numerical optimization, we want to attain the smallest possible value of the objective function. Thus, after a few iterations, it is necessary to switch from gradient descent -- which only works efficiently when we are sufficiently far away from the actual minimum -- to more sophisticated techniques. On the other hand, in machine learning, as we show, attaining the actual minimum is not what we want -- this would be over-fitting. We actually need to stop way before we reach the actual minimum. Thus, we do not need to get too close to the actual minimum -- and so, there is no need to switch from gradient descent to any more sophisticated (and more time-consuming) optimization technique. This explains why -- contrary to what students learn in Numerical Methods -- gradient descent is the most efficient optimization technique in machine learning applications.
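A generic illustration of the early-stopping point made above (a toy Python sketch, not the paper's experiments): plain gradient descent is stopped as soon as a separate validation loss stops improving, well before the exact minimum of the training objective is reached.

def gd_with_early_stopping(grad, loss_val, w0, lr=0.01, patience=5, max_iter=10000):
    w, best_w, best_val, bad = w0, w0, float("inf"), 0
    for _ in range(max_iter):
        w = w - lr * grad(w)            # one gradient-descent step on the training objective
        v = loss_val(w)                 # validation loss, used only for stopping
        if v < best_val:
            best_val, best_w, bad = v, w, 0
        else:
            bad += 1
            if bad >= patience:         # stop: further descent would only over-fit
                break
    return best_w

# toy usage: the training minimum is at w = 3, but validation prefers w near 2.5
print(gd_with_early_stopping(grad=lambda w: 2 * (w - 3.0),
                             loss_val=lambda w: (w - 2.5) ** 2,
                             w0=0.0))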
File in pdf
Published in Applied Mathematical Sciences, 2021, Vol. 15, No. 1, pp. 9-14.
In many practical situations, we need to migrate an existing software package to a new programming language and/or a new operating system. In such a migration, it is important to be able to accurately estimate the time needed for this migration: if we underestimate this time, we will lose money and may go bankrupt; if we overestimate this time, other companies that estimate more accurately will outbid us, and we will lose the contract. The formulas currently used for estimating migration time often lead to underestimation. In this paper, we start with the main ideas behind the existing formulas, and show that a deeper analysis of the situation leads to more accurate estimates. Our empirical study shows that the new, more accurate formulas do not suffer from underestimation.
File in pdf