Published in: M. Papadrakakis, V. Papadopoulos, G. Stefanou (eds.), Proceedings of the 3rd ECCOMAS Thematic Conference on Uncertainty Quantification in Computational Sciences and Engineering UNCECOMP'2019, Crete, Greece, June 24-26, 2019, pp. 713-723.
In engineering simulation, the physics of the system under study is often described by a complex, legacy, or proprietary (secret) black-box code. Strategies to propagate epistemic uncertainty through such codes are desperately needed -- for code verification, sensitivity analysis, and validation against experimental data. Very often in practice, the uncertainty in the inputs is characterised by imprecise probability distributions or distributions with interval parameters, also known as probability boxes. In this paper, we propose a strategy based on line sampling to propagate both aleatory and epistemic uncertainty through black-box codes and obtain interval probabilities of failure. The efficiency of the proposed strategy is demonstrated on the NASA LaRC UQ problem.
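A minimal numerical sketch of the setting (not the line-sampling algorithm itself, just a plain double-loop Monte Carlo with a hypothetical limit-state function g and a toy probability box in which only the mean of a normal input is interval-valued):

    import numpy as np

    rng = np.random.default_rng(0)

    def g(x):
        """Hypothetical limit-state function: failure when g(x) < 0."""
        return 3.0 - x

    def failure_probability(mu, sigma=1.0, n=100_000):
        samples = rng.normal(mu, sigma, size=n)      # inner (aleatory) loop
        return np.mean(g(samples) < 0.0)

    mu_values = np.linspace(0.0, 1.0, 11)            # outer (epistemic/interval) loop
    p_values = [failure_probability(mu) for mu in mu_values]
    print(f"interval failure probability: [{min(p_values):.4f}, {max(p_values):.4f}]")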
Published in Reliable Computing, 2020, Vol. 27, pp. 12-20.
In many practical applications, ranging from self-driving cars to industrial applications of mobile robots, it is important to take interval uncertainty into account when performing odometry, i.e., when estimating how our position and orientation ("pose") change over time. In particular, one important aspect of this problem is detecting mismatches (outliers). In this paper, we describe an algorithm to compute the rigid body transformation, including a provably optimal sub-algorithm for detecting mismatches.
Original file UTEP-CS-19-114 in pdf
Updated version UTEP-CS-19-114a in pdf
Published in Foundations of Science, 2021, Vol. 26, pp. 703-725, doi 10.1007/s10699-020-09659-z
Joule's Energy Conservation Law was the first "meta-law": a general principle that all physical equations must satisfy. It has led to many important and useful physical discoveries. However, a recent analysis seems to indicate that this meta-law is inconsistent with other principles -- such as the existence of free will. We show that this conclusion about inconsistency is based on a seemingly reasonable -- but simplified -- analysis of the situation. We also show that a more detailed mathematical and physical analysis reveals that not only does Joule's principle remain true -- it is actually strengthened: it is no longer just a principle that all physical theories should satisfy -- it is a principle that all physical theories do satisfy.
Original file UTEP-CS-19-113 in pdf
Updated version UTEP-CS-19-113a in pdf
Published in: Marie-Jeanne Lesot, Susana Vieira, Marek Z. Reformat, Joao Paulo Carvalho, Anna Wilbik, Bernadette Bouchon-Meunier, and Ronald R. Yager (eds.), Proceedings of the 18th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems IPMU'2020, Lisbon, Portugal, June 15-19, 2020, pp. 70-79.
In many practical situations, we only know the interval containing the quantity of interest; we have no information about the probabilities of different values within this interval. In contrast to the cases when we know the distributions and can thus use Monte-Carlo simulations, processing such interval uncertainty is difficult -- crudely speaking, because we need to try all possible distributions on this interval. Sometimes, the problem can be simplified: namely, for estimating the range of values of some characteristic of the distribution, it is possible to select a single distribution (or a small family of distributions) whose analysis provides a good understanding of the situation. The best known case is when we are estimating the largest possible value of Shannon's entropy: in this case, it is sufficient to consider the uniform distribution on the interval. Interestingly, estimating other characteristics leads to the selection of the same uniform distribution: e.g., estimating the largest possible values of generalized entropy or of some sensitivity-related characteristics. In this paper, we provide a general explanation of why the uniform distribution appears in different situations -- namely, it appears every time we have a permutation-invariant optimization problem with a unique optimum. We also discuss what happens if we have an optimization problem that attains its optimum at several different distributions -- this happens, e.g., when we are estimating the smallest possible value of Shannon's entropy (or of its generalizations).
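For reference, here is the textbook derivation behind the best known case mentioned above (maximizing Shannon's entropy over probability densities on an interval [a,b]):

\[
  \max_{\rho}\;\Bigl(-\int_a^b \rho(x)\,\ln\rho(x)\,dx\Bigr)
  \quad\text{subject to}\quad \int_a^b \rho(x)\,dx = 1 .
\]

Introducing a Lagrange multiplier \lambda and setting the variational derivative to zero gives -\ln\rho(x) - 1 + \lambda = 0, i.e., \rho(x) = const; normalization then yields the uniform density \rho(x) = 1/(b - a).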
Original version UTEP-CS-19-112 in pdf
Updated version UTEP-CS-19-112a in pdf
Published in: Marie-Jeanne Lesot, Susana Vieira, Marek Z. Reformat, Joao Paulo Carvalho, Anna Wilbik, Bernadette Bouchon-Meunier, and Ronald R. Yager (eds.), Proceedings of the 18th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems IPMU'2020, Lisbon, Portugal, June 15-19, 2020, pp. 59-69.
Current artificial neural networks are very successful in many machine learning applications, but in some cases they still lag behind human abilities. To improve their performance, a natural idea is to simulate features of biological neurons which are not yet implemented in machine learning. One such feature is the fact that in biological neural networks, signals are represented by trains of spikes. Researchers have tried adding this spikiness to machine learning and indeed got very good results, especially when processing time series (and, more generally, spatio-temporal data). In this paper, we provide a possible theoretical explanation for this empirical success.
Original version UTEP-CS-19-111 in pdf
Updated version UTEP-CS-19-111b in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 141-151.
In the usual 2-valued logic, from the purely mathematical viewpoint, there are many possible binary operations. However, in commonsense reasoning, we only use a few of them: why? In this paper, we show that fuzzy logic can explain the usual choice of logical operations in 2-valued logic.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 133-140.
In the traditional fuzzy logic, experts' degrees of confidence are described by numbers from the interval [0,1]. Clearly, not all the numbers from this interval are needed: in the whole history of the Universe, there will be only countably many statements and thus, only countably many possible degrees, while the interval [0,1] is uncountable. It is therefore interesting to analyze what the set S of actually used values is. The answer depends on the choice of "and"-operations (t-norms) and "or"-operations (t-conorms). For the simplest pair of min and max, any finite set will do -- as long as it is closed under the negation 1 - a. For the next simplest pair -- the algebraic product and the algebraic sum -- we prove that for a finitely generated set, if the "and"-operation is exact, then the "or"-operation is almost always approximate, and vice versa. For other "and"- and "or"-operations, the situation can be more complex.
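A toy illustration of this contrast (not taken from the paper): the finite set

\[
  S = \{0,\; 0.2,\; 0.5,\; 0.8,\; 1\}
\]

is closed under the negation 1 - a and under min and max, so it can serve as a set of degrees for the min/max pair; under the algebraic product, however, 0.2 * 0.2 = 0.04 is not in S, so new values keep being generated.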
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 43-50.
It is known that the distribution of seismic inter-event times is well described by the Gamma distribution. Recently, this fact has been used to successfully predict major seismic events. In this paper, we explain that the Gamma distribution of seismic inter-event times can be naturally derived from the first principles.
File in pdf
Published in: Evgeny Katz (ed.), DNA- and RNA-Based Computing Systems, Wiley, Hoboken, New Jersey, 2021, pp. 213-230.
The traditional DNA computing schemes are based on using or simulating DNA-related activity. This is similar to how quantum computers use quantum activities to perform computations. Interestingly, in quantum computing, there is another phenomenon known as computing without computing, when, somewhat surprisingly, the result of the computation appears without invoking the actual quantum processes. In this chapter, we show that a similar phenomenon is possible for DNA computing: in addition to the more traditional way of using or simulating DNA activity, we can also use DNA inactivity to solve complex problems. We also show that while DNA computing without computing is almost as powerful as traditional DNA computing, it is actually somewhat less powerful. As a side effect of this result, we also show that, in general, security is somewhat more difficult to maintain than privacy, and data storage is more difficult than data transmission.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 21-26.
At present, we mostly use the decimal (base-10) number system, but in the past, many other systems were used: base-20, base-60 -- which is still reflected in how we divide an hour into minutes and a minute into seconds -- and many others. There is a known explanation for the base-60 system: 60 is the smallest number that can be divided by 2, by 3, by 4, by 5, and by 6. Because of this, e.g., half an hour, one-third of an hour, all the way to one-sixth of an hour all correspond to a whole number of minutes. In this paper, we show that a similar idea can explain all historical number systems if, instead of requiring that the base divide all numbers from 2 to some value, we require that the base divide all but one (or all but two) of such numbers.
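A small script illustrating the divisibility pattern discussed above (the candidate bases are just examples):

    # Which of the small integers 2..6 divide each candidate base?
    for base in (10, 12, 20, 60):
        divisors = [k for k in range(2, 7) if base % k == 0]
        misses = [k for k in range(2, 7) if base % k != 0]
        print(f"base {base:>2}: divisible by {divisors}, not by {misses}")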
File in pdf
To appear in: Proceedings of the 2020 4th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence ISMSI'2020, Thimpu, Bhutan, March 21-22, 2020.
Successes of deep learning are partly due to appropriate selection of activation function, pooling functions, etc. Most of these choices have been made based on empirical comparison and heuristic ideas. In this paper, we show that many of these choices -- and the surprising success of deep learning in the first place -- can be explained by reasonably simple and natural mathematics.
Original file UTEP-CS-19-105 in pdf
Updated version UTEP-CS-19-105b in pdf
To appear in: Proceedings of the 2020 4th International Conference on Intelligent Systems, Metaheuristics & Swarm Intelligence ISMSI'2020, Thimpu, Bhutan, March 21-22, 2020.
Several decades ago, traditional neural networks were the most efficient machine learning technique. Then it turned out that, in general, a different technique called support vector machines is more efficient. Reasonably recently, a new technique called deep learning has been shown to be the most efficient one. These are empirical observations, but how can we explain them -- and thus make the corresponding conclusions more reliable? In this paper, we provide a possible theoretical explanation for the above-described empirical comparisons. This explanation also enables us to explain yet another empirical fact -- that sparsity techniques have turned out to be very efficient in signal processing.
Original file UTEP-CS-19-104 in pdf
Updated version UTEP-CS-19-104b in pdf
Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 433-437.
It can be proven that linear dynamical systems exhibit either stable behavior, or unstable behavior, or oscillatory behavior, or transitional behavior. Interestingly, the same classification often applies to nonlinear dynamical systems as well. In this paper, we provide a possible explanation for this phenomenon, i.e., we explain why a classification based on a linear approximation to dynamical systems often works well in nonlinear cases.
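As a reminder of the standard linear picture (textbook material, not the paper's new result): for a linear system with a diagonalizable matrix A,

\[
  \dot{x} = A\,x, \qquad x(t) = \sum_k c_k\, e^{\lambda_k t}\, v_k ,
\]

where \lambda_k and v_k are the eigenvalues and eigenvectors of A; the behavior is stable when all Re \lambda_k < 0, unstable when some Re \lambda_k > 0, oscillatory when the dominant eigenvalues are purely imaginary, and transitional in the remaining borderline cases.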
File in pdf
Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 373-378.
In this paper, we show that physicists' intuition about randomness is not fully consistent with their belief that every theory is only approximate. We also prove that there is no formal way to reconcile these two intuitions: this reconciliation has to be informal. Thus, there are fundamental reasons why informal knowledge is needed for describing the real world.
File in pdf
Published in Russian Digital Libraries Journal, 2019, Vol. 22, No. 6, pp. 773-779.
In some classes, students want to get a desired passing grade (e.g., C or B) by spending the smallest amount of effort. In such situations, it is reasonable for the instructor to assign the grades for different tasks in such a way that the student's resulting overall effort is the largest possible. In this paper, we show that to achieve this goal, we need to assign, to each task, a number of points proportional to the effort needed for this task.
File in pdf
Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2020, Redmond, Washington, August 20-22, 2020, pp. 427-432.
In many application areas, it is important to predict the user's reaction to new products. In general, this reaction changes with time. Empirical analysis of this dependence has shown that it can be reasonably accurately described by a power law. In this paper, we provide a theoretical explanation for this empirical formula.
File in pdf
Published in Russian Digital Libraries Journal, 2019, Vol. 22, No. 6, pp. 763-768.
Ancient Egyptians represented each fraction as a sum of unit fractions, i.e., fractions of the type 1/n. In our previous papers, we explained that this representation makes perfect sense: e.g., it leads to an efficient way of dividing loaves of bread between people. However, one thing remained unclear: why, when representing fractions of the type 2/(2k+1), the Egyptians did not use the natural representation 1/(2k+1) + 1/(2k+1), but used a much more complicated representation instead. In this paper, we show that the need for such a complicated representation can be explained if we take into account that instead of cutting a rectangular-shaped loaf in one direction -- as we considered earlier -- we can simultaneously cut it in two orthogonal directions. For example, to cut a loaf into 6 pieces, we can cut it into 2 pieces in one direction and into 3 pieces in the other direction. Together, these cuts divide the original loaf into 2*3 = 6 pieces.
It is known that Egyptian fractions are an exciting topic for kids, helping them better understand fractions. In view of this fact, we plan to use our new explanation to further enhance this understanding.
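The two-term unit-fraction representations that the Egyptian 2/n tables actually used (e.g., 2/5 = 1/3 + 1/15) are easy to enumerate; the following small script illustrates these representations, not the paper's cutting argument:

    from fractions import Fraction

    # List all representations 2/n = 1/a + 1/b with a < b for small odd n --
    # the kind of representation used instead of the "obvious" 1/n + 1/n.
    def two_term_representations(n):
        target = Fraction(2, n)
        reps = []
        for a in range(n // 2 + 1, n):   # guarantees 0 < 1/a < 2/n and a < b
            rest = target - Fraction(1, a)
            if rest.numerator == 1:
                reps.append((a, rest.denominator))
        return reps

    for n in (5, 7, 9, 11):
        options = " or ".join(f"1/{a} + 1/{b}" for a, b in two_term_representations(n))
        print(f"2/{n} = {options}")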
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 7-14.
It is well known that, according to the second law of thermodynamics, the entropy of a closed system increases (or at least stays the same). In many situations, this increase is the smallest possible. The corresponding minimum entropy production principle was first formulated and explained by the future Nobelist Ilya Prigogine. Since then, many possible explanations of this principle have appeared, but all of them are very technical, based on a complex analysis of the differential equations describing the system's dynamics. Since this phenomenon is ubiquitous in many systems, it is desirable to look for a general system-based explanation, one that would not depend on the specific technical details. Such an explanation is presented in this paper.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 180-185.
The famous EPR paradox shows that if we describe quantum particles in the usual way -- by their wave functions -- then we get the following seeming contradiction. If we entangle the states of two particles, then move them far away from each other, and measure the state of the first particle, then the state of the second particle immediately changes -- which contradicts special relativity, according to which such immediate action at a distance is not possible. It is known that, from the physical viewpoint, this is not a real paradox: if we measure any property of the second particle, the results will not change whether we perform the measurement on the first particle or not. What the above argument shows is that the usual wave-function description of a quantum state does not always adequately describe the corresponding physics. In this paper, we propose a new, more physically adequate description of a quantum state, a description in which there is no EPR paradox: measurements performed on the first particle do not change the state of the remote second one.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 193-196.
In many application areas, the dynamics of a system is described by a harmonic oscillator with quadratic damping. In particular, this equation describes the flapping of the insect wings and is, thus, used in the design of very-small-size unmanned aerial vehicles. The fact that the same equation appears in many application areas, from stability of large structures to insect flight, seems to indicate that this equation follows from some fundamental principles. In this paper, we show that this is indeed the case: this equation can be derived from natural invariance requirements.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 186-188.
A statistical analysis of hundreds of successful and unsuccessful revolution attempts led historians to a very unexpected conclusion: most attempts involving at least 3.5% of the population succeeded, while most attempts that involved a smaller portion of the population failed. In this paper, we show that this unexpected threshold can be explained based on two other known rules of human behavior: the 80/20 rule (20% of the people drink 80% of the beer) and the 7 plus minus 2 law, according to which we naturally divide everything into 7 plus minus 2 classes.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 220-223.
There is a lot of commonsense advice in decision making: e.g., we should consider multiple scenarios, we should consult experts, we should play down emotions. Much of this advice comes supported by surprisingly consistent quantitative evidence. In this paper, using the above advice as examples, we provide theoretical explanations for these quantitative facts.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 168-171.
Numerical experiments show that for classifying neural networks, it is beneficial to select a smaller deviation for the initial weights than what is usually recommended. In this paper, we provide a theoretical explanation for these unexpected simulation results.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 172-175.
If we place a can of coke that weighs 0.35 kg into a car that weighs 1 ton = 1000 kg, what will be the resulting weight of the car? Mathematics says 1000.35 kg, but common sense says 1 ton. In this paper, we show that this common-sense answer can be explained by the Hurwicz optimism-pessimism criterion of decision making under interval uncertainty.
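For reference, the Hurwicz criterion replaces a quantity known only to lie in an interval [u^-, u^+] by the single equivalent value

\[
  u_{\mathrm{equiv}} = \alpha\, u^{+} + (1-\alpha)\, u^{-},\qquad 0 \le \alpha \le 1,
\]

where \alpha describes the decision maker's degree of optimism; this is the standard formula, and the paper's specific argument for why it leads to the common-sense answer is given in the text.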
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 1-5.
One of the main motivations for designing computer models of complex systems is to come up with recommendations on how to best control these systems. Many complex real-life systems are so complicated that it is not computationally possible to use realistic nonlinear models to find the corresponding optimal control. Instead, researchers make recommendations based on simplified -- e.g., linearized -- models. The recommendations based on these simplified models are often not realistic but, interestingly, they can be made more realistic if we "tone them down" -- i.e., consider predictions and recommendations which are close to the current status quo state. In this paper, we analyze this situation from the viewpoint of general system analysis. This analysis explains the above empirical phenomenon -- namely, we show that this "status quo bias" indeed helps decision makers to take nonlinearity into account.
File in pdf
Published in Mathematical Structures and Modeling, 2020, Vol. 53, pp. 104-107.
Changes in the depression level of the elderly result from a large number of small independent factors. Such situations are ubiquitous in applications. In most such cases, due to the Central Limit Theorem, the corresponding distribution is close to Gaussian. For changes in the depression level of the elderly, however, the empirical distribution is far from Gaussian: it is uniform. In this paper, we provide a possible explanation for the emergence of the uniform distribution.
File in pdf
Published in Mathematical Structures and Modeling, 2020, Vol. 53, pp. 144-148.
One of the biases potentially affecting systems engineers is the confirmation bias: instead of selecting the best hypothesis based on the data, people stick to the previously selected hypothesis until it is disproved. In this paper, on a simple example, we show how important it is to take care of this bias: namely, because of this bias, we need twice as many experiments to switch to a better hypothesis.
File in pdf
Published in International Mathematical Forum, 2019, Vol. 14, No. 5, pp. 209-213.
To compare two different hypothesis testing techniques, researchers use the following heuristic idea: for each technique, they form a curve describing how the probabilities of type I and type II errors are related for this technique, and then compare areas under the resulting curves. In this paper, we provide a justification for this heuristic idea.
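A minimal numerical sketch of this heuristic, with made-up trade-off curves for two hypothetical techniques (beta denotes the probability of a type II error as a function of the type I error probability alpha; a smaller area under the curve means smaller error probabilities overall):

    import numpy as np

    alpha = np.linspace(0.0, 1.0, 101)
    beta_1 = (1.0 - alpha) ** 2      # hypothetical trade-off curve, technique 1
    beta_2 = 1.0 - alpha             # hypothetical trade-off curve, technique 2

    def area_under_curve(y, x):
        """Trapezoid rule."""
        return float(np.sum((y[:-1] + y[1:]) / 2.0 * np.diff(x)))

    print(area_under_curve(beta_1, alpha), area_under_curve(beta_2, alpha))
    # about 0.333 vs 0.5: the first hypothetical technique is preferred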
File in pdf
Published in International Mathematical Forum, 2019, Vol. 14, No. 5, pp. 205-208.
The 80/20 rule and the 7 plus minus 2 law are examples of difficult-to-explain empirical facts. According to the 80/20 rule, in each activity, 20% of the people contribute 80% of the results. The 7 plus minus 2 law means that we divide objects into 7 plus minus 2 groups -- i.e., into 5 to 9 groups. In this paper, we show that there is a relation between these two facts: namely, we show that, because of the 80/20 rule, the number of classes cannot be smaller than 5. Thus, the 80/20 rule explains the lower bound (5) in the 7 plus minus 2 law.
File in pdf
Published in Journal of Uncertain Systems, 2020, Vol. 13, No. 3, pp. 211-215.
To predict how the Pavement Condition Index will change over time, practitioners use a complex empirical formula derived in the 1980s. In this paper, we provide a possible theoretical explanation for this formula, an explanation based on general ideas of invariance. In general, the existence of a theoretical explanation makes a formula more reliable; thus, we hope that our explanation will make predictions of road quality more reliable.
File in pdf
Published in International Journal of Unconventional Computing, 2020, Vol. 15, No. 3, pp. 193-218.
Traditional physics assumes that space and time are continuous. However, a deeper analysis shows that this seemingly reasonable space-time model leads to some serious physical problems. One of the approaches that physicists have proposed to solve these problems is to assume that the space-time is actually discrete. In this paper, we analyze possible computational consequences of this discreteness. It turns out that in a discrete space-time, it is probably possible to solve NP-complete problems in polynomial time: namely, this is possible in almost all physically reasonable models of dynamics in discrete space-time (almost all in some reasonable sense).
Original file UTEP-CS-19-85 in pdf
Updated version UTEP-CS-19-85c in pdf
Published in Geombinatorics, 2021, Vol. 30, No. 4, pp. 208-213.
In the first approximation, the shape of our Galaxy -- as well as the shape of many other celestial bodies -- can be naturally explained by geometric symmetries and the corresponding invariances. As a result, we get the familiar shape of a planar spiral. A recent more detailed analysis of our Galaxy's shape has shown that the Galaxy somewhat deviates from this ideal shape: namely, it is not perfectly planar, it is somewhat warped in the third dimension. In this paper, we show that the empirical formula for this warping can also be explained by geometric symmetries and invariance.
File in pdf
Published in Applied Mathematical Sciences, 2019, Vol. 13, No. 16, pp. 775-779.
A recent patent shows that filtering out higher harmonics helps people sing in tune. In this paper, we use general signal processing ideas to explain this empirical phenomenon. We also show that filtering out higher harmonics is the optimal way of increasing the signal-to-noise ratio -- and thus, of making it easier for people to recognize when they are singing out of tune.
File in pdf
Published in Applied Mathematical Sciences, 2019, Vol. 13, No. 16, pp. 769-773.
To adequately describe the planets' motion, ancient astronomers used epicycles: a planet makes a circular motion not around the Earth, but around a special auxiliary point which, in turn, performs a circular motion around the Earth -- or around a second auxiliary point which, in turn, rotates around the Earth, etc. Standard textbooks malign this approach by calling it bad science, but in reality, this is, in effect, a trigonometric (Fourier) series -- an extremely useful tool in science and engineering. It should be mentioned, however, that epicycles are only almost as good as trigonometric series -- in the sense that in some cases, they need twice as many parameters to achieve the same accuracy.
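In modern notation (a standard observation rather than a result of this paper), the apparent position of a planet described by n nested epicycles is

\[
  z(t) = \sum_{k=1}^{n} R_k\, e^{\,i(\omega_k t + \varphi_k)},
\]

i.e., exactly a finite trigonometric (Fourier-type) sum in which each term describes one circle riding on the previous one.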
File in pdf
Published in Journal of Uncertain Systems, 2020, Vol. 13, No. 3, pp. 233-235.
Testing is a very important part of quality control in education. To decide how to best test, it makes sense to use the experience of other areas where testing is important and where there is a large amount of experimental data comparing the efficiency of different testing strategies. One such area is software engineering. The experience of software engineering shows that the most efficient approach is to test thoroughly at every single stage of the project. With regard to teaching, the resulting recommendation is to make testing as frequent as possible, preferably giving weekly quizzes. At first glance, this may seem difficult, since grading quizzes -- especially for big classes -- requires a lot of time, and instructors usually do not have that much time. This problem can be solved by giving multiple-choice quizzes for which grading can be automatic. Automatic grading also helps make grading more objective -- and thus eliminate perceived grading subjectivity as a potential problem affecting student learning.
File in pdf
Published in Mathematical Structures and Modeling, 2019, Vol. 52, pp. 134-140.
To many students, the notion of a derivative seems unrelated to any previous mathematics -- and is, thus, difficult to study and to understand. In this paper, we show that this notion can be naturally derived from a more intuitive notion of invariance.
File in pdf
To appear in: Evgeny Dantsin and Vladik Kreinovich (eds.), Uncertainty Quantification and Uncertainty Propagation under Traditional and AI-Based Data Processing (and Related Topics): Legacy of Grigory Tseytin, Springer, Cham, Switzerland.
In their 1983 paper, C. Alsina, E. Trillas, and L. Valverde proved that distributivity, monotonicity, and boundary conditions imply that the "and"-operation on the interval [0,1] is min and the "or"-operation is max. In this paper, we extend this result to general partially ordered sets with the greatest element (denoted by 1) and the least element (denoted by 0), and we also show that all the above conditions are necessary for this result to be true.
Original file UTEP-CS-19-79 in pdf
Updated version UTEP-CS-19-79b in pdf
Published in V. Kreinovich (ed.), Statistical and Fuzzy Approaches to Data Processing, with Applications to Econometrics and Other Areas, Springer, Cham, Switzerland, 2021, pp. 133-143.
To make an adequate decision, we need to know the probabilities of different consequences of different actions. In practice, we only have partial information about these probabilities -- this situation is known as imprecise probabilities. A general description of all possible imprecise probabilities requires using infinitely many parameters. In practice, the two most widely used few-parametric approximate descriptions are p-boxes (bounds on the values of the cumulative distribution function) and interval-valued moments (i.e., bounds on moments). In some situations, these approximations are not sufficiently accurate, so we need more accurate, more-parametric approximations. In this paper, we explain what the natural next approximations are.
File in pdf
Published in V. Kreinovich (ed.), Statistical and Fuzzy Approaches to Data Processing, with Applications to Econometrics and Other Areas, Springer, Cham, Switzerland, 2021, pp. 145-156.
In many practical situations, we only have partial information about the probabilities; this means that there are several different probability distributions which are consistent with our knowledge. In such cases, if we want to select one of these distributions, it makes sense not to pretend that we have a small amount of uncertainty -- and thus, it makes sense to select a distribution with the largest possible value of uncertainty. A natural measure of uncertainty of a probability distribution is its entropy. So, this means that out of all probability distributions consistent with our knowledge, we select the one whose entropy is the largest. In many cases, this works well, but in some cases, this Maximum Entropy approach leads to counterintuitive results. For example, if all we know is that the variable is located on a given interval, then the Maximum Entropy approach selects the uniform distribution on this interval. In this distribution, the probability density ρ(x) abruptly changes at the interval's endpoints, while intuitively, we expect that it should change smoothly with x. To reconcile the Maximum Entropy approach with our intuition, we propose to limit distributions to those for which the probability density's rate of change is bounded by some a priori value -- and to limit the search for the distribution with the largest entropy only to such distributions. We show that this natural restriction indeed reconciles the Maximum Entropy approach with our intuition.
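In formulas, the proposed restricted problem is (here \Delta denotes the a priori bound on the rate of change; the notation is ours):

\[
  \max_{\rho}\;\Bigl(-\int \rho(x)\,\ln\rho(x)\,dx\Bigr)
  \quad\text{subject to}\quad
  \int \rho(x)\,dx = 1,\quad \rho(x)\ge 0,\quad \bigl|\rho'(x)\bigr|\le \Delta .
\]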
File in pdf
Published in Geombinatorics, 2021, Vol. 30, No. 1, pp. 109-112.
It is known that, in general, a person keeps in mind between 5 and 9 objects -- this is known as the 7 plus minus 2 law. In this paper, we provide a possible simple geometric explanation for this psychological feature.
File in pdf
Published in Mathematical Structures and Modeling, 2019, Vol. 52, pp. 93-96.
In many graph-related problems, an obvious necessary condition is often also sufficient. This phenomenon is so ubiquitous that it was even named TONCAS, after the first letters of the phrase describing this phenomenon. In this paper, we provide a possible explanation for this phenomenon.
File in pdf
Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Behaviorial Predictive Modeling in Economics, Springer, Cham, Switzerland, 2021, pp. 141-144.
In the Bayesian approach, to describe a prior distribution on the set [0,1] of all possible probability values, typically, a Beta distribution is used. The fact that there have been many successful applications of this idea seems to indicate that there must be a fundamental reason for selecting this particular family of distributions. In this paper, we show that the selection of this family can indeed be explained if we make reasonable invariance requirements.
File in pdf
Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Behaviorial Predictive Modeling in Economics, Springer, Cham, Switzerland, 2021, pp. 195-201.
In many practical situations, for some components of the uncertainty (e.g., of the measurement error) we know the corresponding probability distribution, while for other components, we only know an upper bound on the corresponding values. To decide which of the algorithms or techniques leads to less uncertainty, we need to be able to gauge the combined uncertainty by a single numerical value -- so that we can select the algorithm for which this value is the best. There exist several techniques for gauging the combination of interval and probabilistic uncertainty. In this paper, we consider the problem of gauging the combination of different types of uncertainty from a general fundamental viewpoint. As a result, we develop a general formula for such gauging -- a formula whose particular cases include the currently used techniques.
File in pdf
Published in: Nguyen Ngoc Thach, Vladik Kreinovich, and Nguyen Duc Trung (eds.), Data Science for Financial Econometrics, Springer, Cham, Switzerland, 2021, pp. 37-50.
In many practical situations, observations and measurement results are consistent with many different models -- i.e., the corresponding problem is ill-posed. In such situations, a reasonable idea is to take into account that the values of the corresponding parameters should not be too large; this idea is known as regularization. Several different regularization techniques have been proposed; empirically, the most successful are the LASSO method, in which we bound the sum of the absolute values of the parameters, and the EN and CLOT methods, in which this sum is combined with the sum of the squares. In this paper, we explain the empirical success of these methods by showing that they are the only ones which are invariant with respect to natural transformations -- such as scaling, which corresponds to selecting a different measuring unit.
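For reference, with data (X, y) and parameter vector \beta, the regularized objectives mentioned above have the standard form

\[
  \text{LASSO:}\;\; \min_{\beta}\,\|y - X\beta\|_2^2 + \lambda\,\|\beta\|_1,
  \qquad
  \text{Elastic Net:}\;\; \min_{\beta}\,\|y - X\beta\|_2^2 + \lambda_1\,\|\beta\|_1 + \lambda_2\,\|\beta\|_2^2 ;
\]

CLOT similarly combines the l1 and l2 terms (the exact weighting used in the paper may differ from this sketch).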
File in pdf
Published in: Nguyen Ngoc Thach, Vladik Kreinovich, and Nguyen Duc Trung (eds.), Data Science for Financial Econometrics, Springer, Cham, Switzerland, 2021, pp. 115-120.
In many practical situations, we need to select a model based on the data. It is, at present, practically a consensus that the traditional p-value-based techniques for such selection often do not lead to adequate results. One of the most widely used alternative model selection techniques is the Minimum Bayes Factor (MBF) approach, in which a model is preferred if the corresponding Bayes factor -- the ratio of the likelihoods corresponding to this model and to the competing model -- is sufficiently large for all possible prior distributions. Based on the MBF values, we can decide how strong the evidence in support of the selected model is: weak, strong, very strong, or decisive. The corresponding strength levels are based on a heuristic scale proposed by Harold Jeffreys, one of the pioneers of the Bayesian approach to statistics. In this paper, we propose a justification for this scale.
File in pdf
Published in: Songsak Sriboonchitta, Vladik Kreinovich, and Woraphon Yamaka (eds.), Behaviorial Predictive Modeling in Economics, Springer, Cham, Switzerland, 2021, pp. 145-152.
In many practical situations, we need to make a group decision that takes into account the preferences of all the participants. Ideally, we should elicit, from each participant, full information about his/her preferences, but such elicitation is usually too time-consuming to be practical. Instead, we only elicit, from each participant, his/her ranking of the different alternatives. One of the semi-heuristic methods for decision making under such information is the Borda count, in which, for each alternative and each participant, we count how many alternatives are worse, and then select the alternative for which the sum of these numbers is the largest. In this paper, we explain the empirical success of the Borda count technique by showing that this method naturally follows from the maximum entropy approach -- a natural approach to decision making under uncertainty.
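A minimal sketch of the Borda count itself (the rankings are hypothetical; each participant lists the alternatives from best to worst):

    from collections import defaultdict

    def borda_winner(rankings):
        """Each participant's ranking lists alternatives from best to worst;
        an alternative gets, from each participant, one point per alternative
        ranked below it."""
        scores = defaultdict(int)
        for ranking in rankings:
            n = len(ranking)
            for position, alternative in enumerate(ranking):
                scores[alternative] += n - 1 - position
        return max(scores, key=scores.get), dict(scores)

    rankings = [["A", "B", "C"], ["B", "A", "C"], ["B", "C", "A"]]
    print(borda_winner(rankings))   # B wins with 5 points; A gets 3, C gets 1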
File in pdf
Published in Applied Mathematical Sciences, 2019, Vol. 13, No. 14, pp. 681-684.
To improve the efficiency of artificial insemination, farmers equip cows with sensors, based on which a computer system determines the cow's insemination window. Analysis of the resulting calves showed an unexpected dependence of the calf's gender on the insemination time: cows inseminated earlier in their window mostly gave birth to female calves, while cows inseminated later in their window mostly gave birth to males. In this paper, we provide a general system-based explanation for this phenomenon.
File in pdf
Published in Mathematical Structures and Modeling, 2019, Vol. 52, pp. 27-32.
In their 1999 paper, psychologists David Dunning and Justin Kruger showed that, in general, experts not only provide better estimates of different situations, but also provide better estimates of the accuracy of their estimates. While this phenomenon has been confirmed by many follow-up experiments, it remains largely unexplained. In this paper, we provide a simple system-based qualitative explanation for the Dunning-Kruger effect.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 164-167.
Many computer-based services use recommender systems that predict our preferences based on our degree of satisfaction with the past selections. One of the most efficient techniques making recommender systems successful is matrix factorization. While this technique works well, until now, there was no general explanation of why it works. In this paper, we provide such an explanation.
File in pdf
Published in Geombinatorics, 2019, Vol. 29, No. 2, pp. 78-88.
It is known that seismic waves from a remote earthquake can trigger a small local earthquake. Recent analysis has shown that this triggering occurs mostly when the direction of the incoming wave is orthogonal to the direction of the local fault, some triggerings occur when these directions are parallel, and very few triggerings occur when the angle between these two directions is different from 0 and 90 degrees. In this paper, we propose a symmetry-based geometric explanation for this unexpected observation.
File in pdf
Published in Journal of Uncertain Systems, 2020, Vol. 13, No. 3, pp. 207-210.
It is known that a free neutron decays into a proton, an electron, and an anti-neutrino. Interestingly, recent attempts to measure the neutron's lifetime have led to two slightly different estimates: namely, the number of decaying neutrons is somewhat larger than the number of newly created protons. This difference is known as the neutron lifetime puzzle. A natural explanation for this difference is that in some cases, a neutron decays not into a proton, but into some other particle. If this explanation is true, it implies that nuclei with a sufficiently large number of neutrons will be unstable. Based on the observed difference between the two estimates of the neutron lifetime, we can estimate the largest number of neutrons in a stable nucleus to be between 80 and 128. The fact that the number of neutrons (126) in the actual largest stable nucleus (lead) lies within this interval can serve as an additional argument in favor of the current explanation of the neutron lifetime puzzle.
File in pdf
Published in Mathematical Structures and Modeling, 2019, Vol. 51, pp. 114-117.
Researchers have found that normally, we remember about 30% of the information; however, if we get a test immediately after reading, this rate increases to 45%. In this paper, we show that Zipf's law can explain this empirical dependence.
File in pdf
Published in Applied Mathematical Sciences, 2019, Vol. 13, No. 14, pp. 677-680.
Several researchers have found that acoustic stimulation during sleep enhances sleep and improves memory. An interesting -- and somewhat mysterious -- part of this phenomenon is that out of all possible types of noise, pink noise leads to the most efficient stimulation. In this paper, we use general system-based ideas to explain why, in this phenomenon, pink noise works best.
File in pdf
Published in Mathematical Structures and Modeling, 2019, Vol. 51, pp. 109-113.
We expect the quality of experts' decisions to increase with their experience. This is indeed true for reasonably routine situations. However, surprisingly, empirical data show that in unusual situations, novice experts make much better decisions than more experienced ones. This phenomenon is especially unexpected for medical emergencies: it turns out that the mortality rate of patients treated by novice doctors is a third lower than that of patients treated by experienced doctors. In this paper, we provide a possible explanation for this seemingly counterintuitive phenomenon.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 176-179.
Researchers who monitor the average intelligence of human population have reasonably recently made an unexpected observation: that after many decades in which this level was constantly growing (this is known as the Flynn effect), at present, this level has started decreasing again. In this paper, we show that this reversed Flynn effect can be, in principle, explained in general system-based terms: namely, it is similar to the fact that a control system usually overshoots before stabilizing at the desired level. A similar idea may explain another unexpected observation -- that the Universe's expansion rate, which was supposed to be decreasing, is actually increasing.
File in pdf
Published in Journal of Uncertain Systems, 2020, Vol. 13, No. 3, pp. 201-206.
This article summarizes the main ideas about education which were presented at the 2019 International Forum on Teacher Education in Kazan, Russia, on May 28--31, 2019.
File in pdf
Published in Asian Journal of Economics and Banking, 2019, Vol. 3, No. 2, pp. 17-28.
The traditional Markowitz approach to portfolio optimization assumes that we know the means, variances, and covariances of the return rates of all the financial instruments. In some practical situations, however, we do not have enough information to determine the variances and covariances, we only know the means. To provide a reasonable portfolio allocation for such cases, researchers proposed a heuristic maximum entropy approach. In this paper, we provide an economic justification for this heuristic idea.
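For reference, the maximum entropy allocation mentioned above is the solution of

\[
  \max_{w}\;\Bigl(-\sum_{i=1}^{n} w_i \ln w_i\Bigr)
  \quad\text{subject to}\quad \sum_{i=1}^{n} w_i = 1,\quad w_i \ge 0,
\]

where w_i are the portfolio weights; the solution is the uniform portfolio w_i = 1/n, and the economic justification of this choice is the subject of the paper.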
File in pdf
Published in Journal of Uncertain Systems, 2020, Vol. 13, No. 3, pp. 197-200.
Research conferences are often organized by volunteers, namely, by people who are specialists in research but not in conference organization. As a result, some conferences are very successful, while some are not very successful. It is therefore desirable to provide newbie conference organizers with helpful recommendations. In this article, based on the experience of a successful recent conference, we provide the corresponding basic recommendations -- as well as their general-system explanation.
File in pdf
Published in Geombinatorics, 2020, Vol. 29, No. 4, pp. 104-110.
It is empirically known that roads built on clay soils have different nonlinear mechanical properties than roads built on granular soils (such as gravel or sand). In this paper, we show that this difficult-to-explain empirical fact can be naturally explained if we analyze the corresponding geometric symmetries.
File in pdf
Published in Applied Mathematical Sciences
At first glance, it seems that people should be paid in proportion to their contribution, so if one person produces a little more than another, he/she should be paid a little more. In reality, however, top performers are paid disproportionately more than those whose performance is slightly worse. How can we explain this from an economic viewpoint? We show that there is actually no paradox here: a simple economic analysis shows that in many areas, it makes perfect economic sense to pay much more to top performers.
File in pdf
Published in Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence SSCI'2019, Xiamen, China, December 6-9, 2019, pp. 828-832.
One of the main reasons why computations -- in particular, engineering computations -- take long is that, to be on the safe side, models take into account all possible affecting features, most of which turn out to be not really relevant for the corresponding physical problem. From this viewpoint, it is desirable to find out which inputs are relevant. In general, the problem of checking an input's relevance is itself NP-hard, which means, crudely speaking, that no feasible algorithm can always solve it. Thus, it is desirable to speed up this checking as much as possible. One possible way to speed up such checking is to use quantum computing, namely, the Deutsch-Jozsa algorithm. However, this algorithm is just one way to solve the problem; it is not clear whether a more efficient (or even a different) quantum algorithm is possible. In this paper, we show that the Deutsch-Jozsa algorithm is, in effect, the only possible way to use quantum computing for checking which inputs are relevant.
Original file UTEP-CS-19-54 in pdf
Revised version UTEP-CS-19-54b in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), How Uncertainty-Related Ideas Can Provide Theoretical Explanation for Empirical Dependencies, Springer, Cham, Switzerland, 2021, pp. 63-67.
Formal implication does not capture the intuitive idea of "if A then B", since in formal implication, every two true statements -- even completely unrelated ones -- imply each other. A more adequate description of intuitive implication happens if we consider how much the use of A can shorten a derivation of B. At first glance, it may seem that the number of bits by which we shorten this derivation is a reasonable degree of implication, but we show that this number is not in good accordance with our intuition, and that a natural formalization of this intuition leads to the need to use, as the desired degree, the ratio between the shortened derivation length and the original length.
File in pdf
Published in Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence SSCI'2019, Xiamen, China, December 6-9, 2019, pp. 814-817.
Deep learning and deep reinforcement learning are, at present, the best available machine learning tools for use in engineering problems. However, at present, their use is limited by the fact that they are very time-consuming, usually requiring a high performance computer. It is therefore desirable to look for possible ways to speed up the corresponding computations. One of the time-consuming parts of these algorithms is softmax selection, when instead of always selecting the alternative with the largest value of the corresponding objective function, we select each alternative with a probability that increases with the value of the objective function. In this paper, we propose a significantly faster quantum-computing alternative to softmax selection.
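For reference, here is a minimal classical implementation of the softmax (Boltzmann) selection step whose speedup is discussed above; the temperature parameter and the example values are illustrative only:

    import numpy as np

    rng = np.random.default_rng(0)

    def softmax_select(values, temperature=1.0):
        """Pick an index with probability proportional to exp(value/temperature),
        instead of deterministically picking the argmax."""
        v = np.asarray(values, dtype=float)
        logits = (v - v.max()) / temperature          # subtract max for numerical stability
        probabilities = np.exp(logits) / np.exp(logits).sum()
        return rng.choice(len(v), p=probabilities)

    print(softmax_select([1.0, 2.0, 3.0], temperature=0.5))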
Original file UTEP-CS-19-52 in pdf
Revised version UTEP-CS-19-52b in pdf
Published in Applied Mathematical Sciences, 2019, Vol. 13, No. 12, pp. 585-589.
When people have several possible investment instruments, they often invest equally in these instruments: in the case of n instruments, they invest 1/n of their money into each of them. Of course, if additional information about each instrument is available, this 1/n investment strategy is not optimal. We show, however, that in the absence of reliable information, the 1/n investment is indeed the best strategy.
File in pdf
Java and Android applications can be written in the same programming language. Thus, it is natural to ask how much code can be shared between them. In this paper, we perform a case study to measure quantitatively the amount of code that can be shared and reused for a multiplatform application running on the Java platform and the Android platform. We first configure a development environment consisting of platform-specific tools and supporting continuous integration. We then propose a general architecture for a multiplatform application under a guiding design principle of having clearly defined interfaces and employing loose coupling to accommodate platform differences and variations. Specifically, we separate our application into two parts, a platform-independent part (PIP) and a platform-dependent part (PDP), and share the PIP between platform-specific versions. Our key finding is that 37%--40% of code can be shared and reused between the Java and the Android versions of our application. Interestingly, the Android version requires 8% more code than the Java version due to platform-specific constraints and concerns. We also learned that the quality of an application can be improved dramatically through multiplatform development.
File in PDF
Published in: Panos Pardalos, Varvara Rasskazova, and Michael N. Vrahatis (eds.), Black Box Optimization, Machine Learning and No-Free Lunch Theorems, Springer, Cham, Switzerland, 2021, pp. 195-220.
One of the main objectives of science and engineering is to predict the future state of the world -- and to come up with devices and strategies that would make this future state better. In some practical situations, we know how the state changes with time -- e.g., in meteorology, we know the partial differential equations that describe the atmospheric processes. In such situations, prediction becomes a purely computational problem. In many other situations, however, we do not know the equation describing the system's dynamics. In such situations, we need to learn this dynamics from data. At present, the most efficient way of such learning is to use deep learning -- training a neural network with a large number of layers. To make this idea truly efficient, several trial-and-error-based heuristics were discovered, such as the use of rectified linear neurons, softmax, etc. In this chapter, we show that the empirical success of many of these heuristics can be explained by optimization-under-uncertainty techniques.
File in pdf
Published in: Vladik Kreinovich and Nguyen Hoang Phuong (eds.), Soft Computing for Biomedical Applications and Related Topics, Springer Verlag, Cham, Switzerland, 2021, pp. 67-79.
The more information we have about a quantity, the more accurately we can estimate this quantity. In particular, if we have several estimates of the same quantity, we can fuse them into a single more accurate estimate. What is the accuracy of this estimate? The corresponding formulas are known for the case of probabilistic uncertainty. In this paper, we provide similar formulas for the cases of interval and fuzzy uncertainty.
File in pdf
Published in: Vladik Kreinovich and Nguyen Hoang Phuong (eds.), Soft Computing for Biomedical Applications and Related Topics, Springer Verlag, Cham, Switzerland, 2021, pp. 61-65.
At present, one of the main ways to gauge the quality of a researcher is to use his or her h-index, which is defined as the largest integer n such that the researcher has at least n publications each of which has at least n citations. The fact that this quantity is widely used indicates that the h-index indeed reasonably adequately describes a researcher's quality. So, this notion must capture some intuitive idea. However, the above definition is not intuitive at all; it sounds like a somewhat convoluted mathematical exercise. So why is the h-index so efficient? In this paper, we use known mathematical facts about the h-index -- in particular, the results of its fuzzy-related analysis -- to come up with an intuitive explanation for the h-index's efficiency.
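The definition above translates directly into code; a minimal sketch with a made-up citation list:

    def h_index(citations):
        """Largest n such that at least n papers have at least n citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for i, c in enumerate(counts, start=1):
            if c >= i:
                h = i
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))   # 4: four papers with at least 4 citations each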
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 216-219.
Machine learning techniques have been very efficient in many applications, in particular, when learning to classify a given object to one of the given classes. Such classification problems are ubiquitous: e.g., in medicine, such a classification corresponds to diagnosing a disease, and the resulting tools help medical doctors come up with the correct diagnosis. There are many possible ways to set up the corresponding neural network (or another machine learning technique). A direct way is to design a single neural network with as many outputs as there are classes -- so that for each class i, the system would generate a degree of confidence that the given object belongs to this class. Instead of designing a single neural network, we can follow a hierarchical approach corresponding to a natural hierarchy of classes: classes themselves can usually be grouped into a few natural groups, each group can be subdivided into subgroups, etc. So, we set up several networks: the first classifies the object into one of the groups, then another one classifies it into one of the subgroups, etc., until we finally get the desired class. From the computational viewpoint, this hierarchical scheme seems to be too complicated: why do it if we can use a direct approach? However, surprisingly, in many practical cases, the hierarchical approach works much better. In this paper, we provide a possible explanation for this unexpected phenomenon.
File in pdf
Published in Geombinatorics, 2023, Vol. 33, No. 2, pp. 70-76.
In this paper, we show that many aspects of complex biological processes related to wound healing can be explained in terms of the corresponding geometric symmetries.
File in pdf
Published in: Vladik Kreinovich and Nguyen Hoang Phuong (eds.), Soft Computing for Biomedical Applications and Related Topics, Springer Verlag, Cham, Switzerland, 2021, pp. 49-59.
It is well known that the traditional 2-valued logic is only an approximation to how we actually reason. To provide a more adequate description of how we actually reason, researchers have proposed and studied many generalizations and modifications of the traditional logic -- generalizations and modifications in which some rules of the traditional logic are no longer valid. Interestingly, for some of these rules (e.g., for the law of excluded middle), we have a century of research in logics that violate this rule, while for others (e.g., commutativity of "and"), practically no research has been done. In this paper, we show that fuzzy ideas can help explain why some non-classical logics are more studied and some less studied: namely, it turns out that the most studied violations are those which can be implemented by the simplest expressions (specifically, by polynomials of the lowest order).
File in pdf
Published in Proceedings of the 2019 IEEE Symposium Series on Computational Intelligence SSCI'2019, Xiamen, China, December 6-9, 2019, pp. 881-884.
In many practical situations, we are interested in the values of cumulative quantities -- e.g., quantities that describe the overall quality of a long road segment. Some of these quantities we can measure, but measuring them requires measuring many local values and is thus expensive and time-consuming. As a result, in many cases, instead of measuring, we rely on experts estimating such cumulative quantities on a scale, e.g., from 0 to 5. Researchers have come up with an empirical formula that provides a relation between the measurement result and a 0-to-5 expert estimate. In this paper, we provide a theoretical explanation for this empirically efficient formula.
Original file UTEP-CS-19-43 in pdf
Updated version UTEP-CS-19-43b in pdf
Software developers today are under increasing pressure to support multiple platforms, in particular mobile platforms. However, developing a multiplatform application is difficult and challenging due to a variety of platform differences. We propose a native approach for developing a multiplatform application running on two similar but different platforms, Java and Android. We address practical software engineering concerns of native multiplatform application development, from the configuration of tools to the software design and development process. Our approach allows one to share 37%--40% of application code between the two platforms, while also improving the quality of the application. We believe our approach can also be adapted to transforming existing Java applications into Android applications.
File in pdf
Published in Mathematical Structures and Modeling, 2019, Vol. 51, pp. 105-108.
Empirical studies show that when a medical doctor prescribes a medicine, only two thirds of the patients fill the prescription, and of this prescription-filling group, only half follow the doctor's instructions when taking the medicine. In this paper, we show that a general systems approach -- namely, abstracting from the specifics of this situation -- helps explain these empirical observations. We also mention that the systems approach can not only explain this problem, but also help solve it -- i.e., help increase the patients' adherence to the doctors' recommendations.
File in pdf
Published in Axioms, 2019, Vol. 2019, No. 8, Paper 95.
The main ideas of F-transform came from representing expert rules. It would therefore be reasonable to expect that the more accurately the membership functions describe human reasoning, the more successful the corresponding F-transform formulas will be. We know that an adequate description of our reasoning corresponds to complicated membership functions -- however, somewhat surprisingly, many successful applications of F-transform use the simplest possible triangular membership functions. There exist some explanations for this phenomenon, which are based on the local behavior of the signal. In this paper, we supplement these local explanations by a global one: namely, we prove that triangular membership functions are the only ones that provide the exact reconstruction of the appropriate global characteristic of the signal.
Original file UTEP-CS-19-40 in pdf
Revised version UTEP-CS-19-40b in pdf
Published in Axioms, 2019, Vol. 2019, No. 8, Paper 94.
In many application problems, F-transform algorithms are very efficient. In F-transform techniques, we replace the original signal or image with a finite number of weighted averages. The use of weighted average can be naturally explained, e.g., by the fact that this is what we get anyway when we measure the signal. However, most successful applications of F-transform have an additional not-so-easy-to-explain feature: the fuzzy partition requirement, that the sum of all the related weighting functions is a constant. In this paper, we show that this seemingly difficult-to-explain requirement can also be naturally explained in signal-measurement terms: namely, this requirement can be derived from the natural desire to have all the signal values at different moments of time estimated with the same accuracy. This explanation is the main contribution of this paper.
Original file UTEP-CS-19-39 in pdf
Revised version UTEP-CS-19-39b in pdf
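A minimal numerical sketch of the setting discussed in the two F-transform abstracts above (the test signal, the nodes, and the noise level are assumptions for illustration): triangular basis functions centered at equidistant nodes form a fuzzy partition (they sum to 1 at every point), the direct F-transform components are weighted averages of the signal, and the inverse F-transform gives a smoothed reconstruction.

```python
# A minimal sketch (test signal and nodes assumed): discrete F-transform with
# triangular basis functions that form a fuzzy partition (they sum to 1 everywhere).
import numpy as np

t = np.linspace(0, 1, 201)
f = np.sin(2 * np.pi * t) + 0.1 * np.random.default_rng(1).normal(size=t.size)

nodes = np.linspace(0, 1, 6)          # centers of the triangular basis functions
h = nodes[1] - nodes[0]
A = np.maximum(0.0, 1.0 - np.abs(t[None, :] - nodes[:, None]) / h)  # triangles

print("partition of unity:", np.allclose(A.sum(axis=0), 1.0))  # True

# direct F-transform: weighted averages of the signal
F = (A * f).sum(axis=1) / A.sum(axis=1)
# inverse F-transform: a smoothed reconstruction from the components
f_hat = (A * F[:, None]).sum(axis=0)
print("max deviation of the smoothed reconstruction:", np.max(np.abs(f - f_hat)))
```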
Published in Proceedings of the XXII International Conference on Soft Computing and Measurements SCM'2019, St. Petersburg, Russia, May 23-25, 2019, pp. 7-12.
In many practical situations, it is not realistically possible to directly measure the desired physical quantity. In such situations, we have to measure this quantity indirectly, i.e., measure related quantities and use the known relation to estimate the value of the desired quantity. How accurate is the resulting estimate? The traditional approach assumes that the measurement errors of all direct measurements are independent. In many practical situations, this assumption works well, but in many other practical situations, it leads to a drastic underestimation of the resulting estimation error: e.g., when we base our estimate on measurements performed at nearby moments of time, since there is usually a strong correlation between the corresponding measurement errors. An alternative approach makes no assumptions about dependence. This alternative approach, vice versa, often leads to a drastic overestimation of the resulting estimation error. To get a more realistic estimate, it is desirable to take into account that while on the local level we may have correlations, globally, measurement errors are usually indeed independent -- e.g., for measurements sufficiently separated in time and/or space. In this paper, we show how to analyze such situations by combining Monte-Carlo techniques corresponding to both known approaches. On a geophysical example, we show that this combination indeed leads to realistic estimates.
File in pdf
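To make the contrast described above concrete, here is a minimal Monte-Carlo sketch (the error model, block size, and indirect quantity are my assumptions, not the paper's algorithm): for a simple indirect estimate (the mean of n measurements), fully independent errors give an optimistic error spread, fully correlated errors give an over-cautious one, and a block structure (correlated nearby, independent far apart) gives an intermediate, more realistic picture.

```python
# A minimal sketch (assumptions mine, not the paper's algorithm): Monte-Carlo
# estimates of the error of a simple indirect estimate y = mean(x_1..x_n)
# under three assumptions about the measurement errors.
import numpy as np

rng = np.random.default_rng(0)
n, sigma, runs = 100, 1.0, 10_000

def spread(errors):
    # standard deviation of the resulting error of y = mean of n measurements
    return errors.mean(axis=1).std()

# 1) all errors independent -> usually an underestimate for nearby measurements
indep = rng.normal(0, sigma, size=(runs, n))

# 2) all errors identical (perfectly correlated) -> the over-cautious extreme
common = np.repeat(rng.normal(0, sigma, size=(runs, 1)), n, axis=1)

# 3) block structure: correlated within blocks of 10 nearby measurements,
#    independent between blocks -- the intermediate, more realistic picture
block = np.repeat(rng.normal(0, sigma, size=(runs, n // 10)), 10, axis=1)

for name, e in [("independent", indep), ("fully correlated", common), ("block", block)]:
    print(f"{name:17s} error spread: {spread(e):.3f}")
```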
Published in: Roman Wyrzykowski, Ewa Deelman, Jack Dongarra, and Konrad Karczewski (eds.), Proceedings of the International Conference on Parallel Processing and Applied Mathematics PPAM'2019, Bialystok, Poland, September 8-11, 2019, Springer, 2020, Vol. II, pp. 364-373.
One of the important parts of deep learning is the use of the softmax formula, which enables us to select one of the alternatives with a probability depending on its expected gain. A similar formula describes human decision making: somewhat surprisingly, when presented with several choices with different expected equivalent monetary gain, we do not just select the alternative with the largest gain; instead, we make a random choice, with probability decreasing with the gain -- so that it is possible that we will select the second highest and even the third highest value. Both formulas assume that we know the exact value of the expected gain for each alternative. In practice, we usually know this gain only with some uncertainty. For example, often, we only know the lower bound L and the upper bound U on the expected gain, i.e., we only know that the actual gain g is somewhere in the interval [L,U]. In this paper, we show how to extend softmax and discrete choice formulas to interval uncertainty.
Original file UTEP-CS-19-37 in pdf
Updated version UTEP-CS-19-37a in pdf
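For reference, here is the standard softmax choice formula in code, together with one simple, purely hypothetical way of handling interval gains [L, U] (reducing each interval to a Hurwicz-type point alpha*U + (1-alpha)*L); this is not necessarily the extension proposed in the paper.

```python
# A minimal sketch: standard softmax choice probabilities, plus one simple
# (hypothetical, not necessarily the paper's) way to handle interval gains.
import numpy as np

def softmax(gains, beta=1.0):
    z = np.exp(beta * (gains - np.max(gains)))   # subtract max for numerical stability
    return z / z.sum()

print(softmax(np.array([3.0, 2.5, 1.0])))        # the highest gain is the most probable choice

L = np.array([2.0, 1.5, 0.5])                    # lower bounds on the gains (assumed)
U = np.array([4.0, 3.5, 1.5])                    # upper bounds on the gains (assumed)
alpha = 0.6                                      # optimism parameter (assumed)
print(softmax(alpha * U + (1 - alpha) * L))      # softmax over Hurwicz-type point values
```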
To appear in: Evgeny Dantsin and Vladik Kreinovich (eds.), Uncertainty Quantification and Uncertainty Propagation under Traditional and AI-Based Data Processing (and Related Topics): Legacy of Grigory Tseytin, Springer, Cham, Switzerland.
The educational landscape is becoming a digital learning environment. Students in today's digital world draw from multiple sources of information: from hypertext, videos, and social media to video games and internet searches (Luke, 2005). Emerging bilinguals, individuals learning two languages at once, who use software written in English have a passive relationship with the computer when the software is not in their native language. They feel that this educational software belongs to another culture. This paper presents findings from a study of emergent bilinguals' engagement in a fully online pre-calculus course. The authors utilized Cultural-Historical Activity Theory to describe how emergent bilinguals (prospective teachers) created authentic bilingual learning environments and improved their self-efficacy for mathematics. This study utilized Activity Theory to explicate the complex digital practices of emergent bilinguals while engaged in an online mathematics course. This mixed methods study was conducted over four semesters at a university on the U.S.-Mexico border. Data collected from a demographic survey, class forum questions, daily logs with snapshots, self-efficacy surveys, and emails, as well as face-to-face interviews, were analysed through a constant comparison method. Two tensions emerged from the findings: the importance of learning English and encountering unfamiliar Spanish dialects or translations. The results of this study demonstrated that emergent bilinguals mediated several forms of translators and culturally relevant videos for meaning making and to make cognitive connections with the topics in an online mathematics course. They further developed agency in creating an equitable educational digital space where they developed mathematical biliteracy.
File in pdf
To appear in: Evgeny Dantsin and Vladik Kreinovich (eds.), Uncertainty Quantification and Uncertainty Propagation under Traditional and AI-Based Data Processing (and Related Topics): Legacy of Grigory Tseytin, Springer, Cham, Switzerland.
Many researchers have been analyzing how to further improve teacher preparation -- and thus, how to improve teaching. Many of their results are based on complex models and/or on complex data analysis. Because of this complexity, future teachers often view the resulting recommendations as black boxes, without understanding the motivations for these recommendations -- and thus, without much willingness to follow them. One of the natural ways to make these recommendations clearer is to reformulate them in geometric terms, since geometric models are usually easier to understand than more abstract algebraic ones. In this paper, on the example of two pedagogical recommendations related to the order in which material is presented, we show that the motivations for these recommendations can indeed be described in geometric terms. Hopefully, this will make teachers more willing to follow these recommendations.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 224-228.
In 1977, the renowned physicist Victor Weisskopf challenged the physics community to provide a fundamental explanation for the existence of the liquid phase of matter. A recent essay confirms that Weisskopf's 1977 question remains a challenge. In this paper, we use natural symmetry ideas to show that liquids are actually a natural state between solids and gases.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 229-232
In the US education system, instructors typically use 90/100, 80/100, 70/100, and 60/100 thresholds to gauge students' knowledge: students who get 90 or more points out of 100 get the highest grade of A, students whose grades are between 80 and 90 get a B, followed by C, D, and F (fail). In this paper, we show that these seemingly arbitrary thresholds have a natural explanation: A means that a student can solve almost all problems; C means that two students with this level of knowledge can solve almost all problems when working together; D means that we need at least three such students; and B means that, working jointly with a D student, they can solve almost all problems.
File in pdf
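One way to make this reasoning numerical is the following back-of-the-envelope sketch (the probabilistic interpretation and the 10% cutoff for "almost all" are my assumptions, not necessarily the paper's model): if a student with grade s out of 100 fails on a random problem with probability 1 - s/100, the joint-work conditions above give thresholds close to the familiar 90/80/70/60 cutoffs.

```python
# A back-of-the-envelope sketch (assumptions mine): "almost all problems" is read
# as failing with probability at most 0.1, and students fail independently.
q_A = 0.1                      # A: the student alone fails at most 10% of problems
q_C = 0.1 ** (1 / 2)           # C: two such students jointly fail at most 10%
q_D = 0.1 ** (1 / 3)           # D: three such students jointly fail at most 10%
q_B = 0.1 / q_D                # B: a B student together with a D student

for grade, q in [("A", q_A), ("B", q_B), ("C", q_C), ("D", q_D)]:
    print(f"{grade}: threshold about {100 * (1 - q):.0f}/100")
# prints roughly 90, 78, 68, 54 -- close to the standard 90/80/70/60 thresholds
```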
Published in Physics of Life Reviews, 2019, Vol. 29, pp. 93-95.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 3, pp. 189-192.
According to a naive understanding of economic behavior, for each object, we should have an internal estimate of how much this object is worth to us. If anyone offers to sell us this object for less than this amount, we should agree, and if anyone offers to buy it from us for more, we should agree as well. In practice, however, contrary to this understanding, the price at which we are willing to buy and the price at which we are willing to sell are often different. In this paper, we show that this seemingly counterintuitive phenomenon can be explained within decision theory -- if we use the standard Hurwicz optimism-pessimism recommendations for decision making under uncertainty.
File in pdf
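The Hurwicz criterion named in the abstract can be illustrated with a short sketch (the interval of possible values and the optimism parameter are assumptions for illustration): for a somewhat pessimistic decision maker, the highest acceptable buying price comes out lower than the lowest acceptable selling price.

```python
# A minimal sketch of the Hurwicz optimism-pessimism criterion applied to buying
# and selling an object whose worth is only known to lie in [lo, hi];
# the interval and alpha are assumed for illustration.
lo, hi, alpha = 10.0, 40.0, 0.3          # alpha < 0.5: a somewhat pessimistic agent

def hurwicz(worst, best, alpha):
    """Hurwicz value: alpha * best case + (1 - alpha) * worst case."""
    return alpha * best + (1 - alpha) * worst

# Buying at price p gives gain (value - p): best case value = hi, worst case = lo.
# It is acceptable when hurwicz(lo - p, hi - p) >= 0, i.e. p <= hurwicz(lo, hi).
max_buy_price = hurwicz(lo, hi, alpha)

# Selling at price p gives gain (p - value): best case value = lo, worst case = hi.
# It is acceptable when hurwicz(p - hi, p - lo) >= 0, i.e. p >= hurwicz(hi, lo).
min_sell_price = hurwicz(hi, lo, alpha)

print(max_buy_price, min_sell_price)     # 19.0 and 31.0: buying and selling prices differ
```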
Published in Computacion y Sistemas, 2019, Vol. 23, No. 4, pp. 1569-1573.
The paper proposes a new class of fuzzy set similarity measures taking into account the proximity of membership values to the border values 0 and 1. These similarity measures take values in [0,1] and generalize the crisp weak equality relation of fuzzy sets considered in the theory of fuzzy sets. The method of construction of a contrast similarity measure using a bipolar function symmetric with respect to 0.5 is presented. The similarity measure defined by the contrast intensification operation considered by Lotfi Zadeh is discussed.
Original file UTEP-CS-19-30 in pdf
Revised version UTEP-CS-19-30a in pdf
Published in Applied Mathematical Sciences, 2019, Vol. 13, No. 8, pp. 397-404.
One of the challenges in foundations of finance is the so-called "no trade theorem" paradox: if an expert trader wants to sell some stock, that means that this trader believes that this stock will go down; however, the very fact that another expert trader is willing to buy it means that this other expert believes that the stock will go up. The fact that equally good experts have different beliefs should dissuade the first expert from selling -- and thus, trades should be very rare. However, in reality, trades are ubiquitous. In this paper, we show that a detailed application of decision theory solves this paradox and explains how a trade can be beneficial to both seller and buyer. This application also explains a known psychological fact -- that depressed people are usually more risk-averse.
File in pdf
While many aspects of speech processing, including speech recognition and speech synthesis, have seen enormous advances over the past few years, advances in dialog have been more modest. This difference is largely attributable to the lack of resources that can support machine learning of dialog models and dialog phenomena. The research community accordingly needs a corpus of spoken dialogs with quality annotations every 100 milliseconds or so. We envisage a large and diverse collection: on the order of fifty hours of data, representing hundreds of speakers and many genres, with every instant labeled for interaction quality by one or more human judges. To make it maximally useful, its design will be a community effort.
This technical report is an edited version of a National Science Foundation proposal, submitted to the CISE Community Research Infrastructure Program in February 2019.
File in pdf
Published in: Martine Ceberio and Vladik Kreinovich (eds.), Decision Making under Constraints, Springer Verlag, Cham, Switzerland, 2020, pp. 7-14.
It is known that the use of quantum computing can reduce the time needed for a search in an unsorted array: from the original non-quantum time T to a much smaller quantum computation time Tq proportional to the square root √(T) of T. In this paper, we show that for a continuous optimization problem, with quantum computing, we can reach almost the same speed-up: namely, we can reduce the non-quantum time T to a much shorter quantum computation time √(T) * ln(T).
File in pdf
Published in Mathematical Structures and Modeling, 2019, Vol. 51, pp. 97-104.
Wavelets of different shapes are known to be very efficient in many data processing problems. In many engineering applications, the most efficient shapes are shapes of a generalized harmonic wavelet, i.e., a wavelet of the shape w(t) = t^a * exp(b * t) for complex b. Similar functions are empirically the most successful in seismic analysis -- namely, in simulating the earthquake-related high-frequency ground motion. In this paper, we provide a theoretical explanation for the empirical success of these models.
File in pdf
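For concreteness, the generalized harmonic wavelet from the abstract can be evaluated directly (the parameter values below are assumptions for illustration): the imaginary part of b sets the oscillation frequency and its negative real part sets the decay.

```python
# A minimal sketch (parameter values assumed): evaluating a generalized harmonic
# wavelet w(t) = t**a * exp(b*t) with complex b.
import numpy as np

a = 2.0
b = complex(-1.5, 2 * np.pi * 5.0)     # decay rate -1.5, oscillation frequency 5 Hz
t = np.linspace(0.0, 3.0, 1000)

w = t ** a * np.exp(b * t)             # complex-valued wavelet
print(w[:3])                            # its real part is an oscillating, decaying pulse
```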
Published in: Michael Beer and Enrico Zio (eds.), Proceedings of the 29th European Safety and Reliability Conference ESREL'2019, Hannover, Germany, September 22-26, 2019, pp. 1560-1565.
Earthquakes can lead to huge damage -- and the big problem is that they are very difficult to predict. To be more precise, it is very difficult to predict the time of a future earthquake. However, we can estimate which earthquake locations are probable. In general, earthquakes are mostly concentrated around the corresponding faults. For some faults, all the earthquakes occur in a narrow vicinity of the fault, while for other faults, areas more distant from the fault are risky as well. To properly estimate the earthquake risk, it is important to understand when this risk is limited to a narrow vicinity of a fault and when it is not.
This problem has been thoroughly studied for the most well-studied fault in the world: the San Andreas fault. This fault consists of somewhat different Northern and Southern parts. The Northern part is close to a straight line, and in this part, earthquakes are mostly limited to a narrow vicinity of this line. In contrast, the Southern part is different: it is curved, and earthquakes can happen much further from the main fault. In this paper, we provide a general explanation for this phenomenon. The existence of such a general explanation makes us expect that the same phenomenon will be observed at other, not-so-well-studied faults as well.
Original file UTEP-CS-19-25 in pdf
Updated version UTEP-CS-19-25a in pdf
Published in: Michael Beer and Enrico Zio (eds.) Proceedings of the 29th European Safety and Reliability Conference ESREL'2019, Hannover, Germany, September 22-26, 2019, pp. 3164-3168.
To guarantee reliability and safety of engineering structures, we need to regularly measure their mechanical properties. Such measurements are often expensive and time-consuming. It is therefore necessary to carefully plan the corresponding measurement experiments, to minimize the corresponding expenses.
It is known that, in general, experiment design is NP-hard. However, the previous proofs dealt either with nonlinear systems or with situations with low measurement accuracy. In civil engineering, however, most systems are well described by linear models, and measurements are reasonably accurate. In this paper, we show that experiment design is NP-hard even for civil engineering problems. We show that even checking whether the results of the previous measurements are sufficient to determine the value of the desired mechanical quantity -- or whether additional measurements are needed -- even this problem is, in general, NP-hard. So, crudely speaking, no feasible algorithm can always answer this question -- and thus, overspending on measurements is inevitable.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 122-125.
The standard way to describe a road's roughness is to use a single numerical characteristic called the International Roughness Index (IRI). This characteristic describes the effect of the road roughness on a vehicle of standard size. To estimate IRI, practitioners have tried to use easily available vehicles (whose size may be somewhat different) and then estimate IRI based on these different-size measurements. The problem is that the resulting estimates of IRI are very inaccurate -- which means that a single numerical characteristic like IRI is not sufficient to properly describe road roughness. In this paper, we show that road roughness can be described by a fractal (power law) model. As a result, we propose to supplement IRI with another numerical characteristic: the power-law exponent that describes how the effect of roughness changes when we change the size of the vehicle.
File in pdf
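If the roughness effect indeed follows a power law, the proposed extra characteristic (the exponent) can be estimated from measurements with vehicles of different sizes by a log-log fit; the following sketch uses synthetic numbers, not real road data.

```python
# A minimal sketch (synthetic numbers, not real road data): if the roughness
# effect follows effect(size) = C * size**p, the exponent p can be recovered
# from different-size measurements by a log-log least-squares fit.
import numpy as np

sizes = np.array([2.0, 3.0, 4.0, 5.0, 6.0])          # hypothetical vehicle sizes
effects = 1.7 * sizes ** 0.8 * np.exp(np.random.default_rng(2).normal(0, 0.02, 5))

p, logC = np.polyfit(np.log(sizes), np.log(effects), 1)
print(f"estimated exponent p = {p:.2f}, estimated C = {np.exp(logC):.2f}")
```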
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 114-117.
Users of computer-based systems often want perfect reproducibility: when encountering the same situation twice, the system should exhibit the same behavior. For real-life systems that include sensors, this is not always possible: due to inevitable measurement uncertainty, for the same actual value of the corresponding quantity, we may get somewhat different measurement results, and thus, show somewhat different behavior. In this paper, we show that the above-described ideal reproducibility is not possible even in the idealized situation, when we assume that a sensor can perform its measurement with any given accuracy.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 138-141.
In many practical situations, the information comes not in terms of the original image or signal, but in terms of its Fourier transform. To detect complex features based on this information, it is often necessary to use machine learning. In the Fourier transform, usually, there are many components, and it is not easy to use all of them in machine learning. So, we need to select the most informative components. In this paper, we provide general recommendations on how to select such components. We also show that these recommendations are in good accordance with two examples: the structure of the human color vision, and classification of lung dysfunction in children.
File in pdf
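As a simple illustration of the task of selecting Fourier components as machine-learning features (the selection rule below -- keep the components with the largest average magnitude -- is one simple heuristic, not necessarily the paper's recommendation, and the signals are synthetic):

```python
# A minimal sketch (synthetic signals; one simple heuristic, not necessarily
# the paper's recommendation): pick the Fourier components with the largest
# average magnitude and use them as features.
import numpy as np

rng = np.random.default_rng(3)
signals = rng.normal(size=(50, 256))                      # 50 hypothetical signals
signals += np.sin(2 * np.pi * 7 * np.arange(256) / 256)   # a shared 7-cycle component

spectra = np.abs(np.fft.rfft(signals, axis=1))
k = 5
top = np.argsort(spectra.mean(axis=0))[::-1][:k]          # indices of the k strongest components
features = spectra[:, top]                                # feature matrix for a classifier
print("selected component indices:", sorted(top.tolist()))
```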
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 104-108.
In many practical situations, we need to make a decision under interval or set uncertainty: e.g., we need to decide how much we are willing to pay for an option that will bring us between $10 and $40, i.e., for which the set of possible gains is the interval S = [10,40]. To make such decisions, researchers have used the idea of additivity: that if we have two independent options, then the price we pay for both should be equal to the sum of the prices that we pay for each of these options. It is known that this requirement enables us to make decisions for bounded closed sets S. In some practical situations, the set S of possible gains is not closed: e.g., we may know that the gain will be between $10 and $40, but always greater than $10 and always smaller than $40. In this case, the set of possible values is an open interval S = (10,40). In this paper, we show how to make decisions in situations of general -- not necessarily closed -- set uncertainty.
File in pdf
Published in: Shahnaz N. Shahbazova, Ali M. Abbasov, Vladik Kreinovich, Janusz Kacprzyk, and Ildar Batyrshin (eds.), "Recent Developments and the New Directions of Research, Foundations, and Applications", Springer, Cham, Switzerland, 2023, Vol. 2, pp. 171-192.
To adequately treat different types of lung dysfunctions in children, it is important to properly diagnose the corresponding dysfunction, and this is not an easy task. Neural networks have been trained to perform this diagnosis, but they are not perfect in diagnostics: their success rate is 60%. In this paper, we show that by selecting an appropriate invariance-based pre-processing, we can drastically improve the diagnostic success, to 100% for diagnosing the presence of a lung dysfunction.
Original file UTEP-CS-19-19 in pdf
Updated version UTEP-CS-19-19d in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 126-132.
The traditional fuzzy logic does not distinguish between the cases when we know nothing about a statement S and the cases when we have equally convincing arguments for S and for its negation ~S: in both cases, we assign the degree 0.5 to such a statement S. This distinction is provided by intuitionistic fuzzy logic, where, to describe our degree of confidence in a statement S, we use two numbers a+ and a− that characterize our degrees of confidence in S and in ~S. An even more detailed distinction is provided by picture fuzzy logic, in which we differentiate between cases when we are still trying to understand the truth value and cases when we are no longer interested. The question is how to extend "and"- and "or"-operations to these more general logics. In this paper, we provide a general idea for such an extension, an idea that explains several extensions that have been proposed and successfully used.
File in pdf
Published as a chapter in Griselda Valdepenas Acosta, Eric D. Smith, and Vladik Kreinovich, Towards Analytical Techniques for Systems Engineering Applications, Springer Verlag, Cham, Switzerland, 2020.
In many situations, like driving, it is important that a person concentrates all his/her attention on a certain critical task -- e.g., watching the road for possible problems. Because of this need to maintain a high level of attention, it was assumed, until recently, that in such situations, the person maintains a constantly high level of attention (of course, until he or she gets tired). Interestingly, recent experiments showed that in reality, from the very beginning, the attention level oscillates. In this paper, we show that such an oscillation is indeed helpful -- and thus, it is necessary to emulate such an oscillation when designing automatic systems, e.g., for driving.
File in pdf
Published in Proceedings of the Joint 11th Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'2019 and International Quantum Systems Association (IQSA) Workshop on Quantum Structures, Prague, Czech Republic, September 9-13, 2019.
The need for faster and faster computing necessitates going down to the quantum level -- which means involving quantum computing. One of the important features of quantum computing is that it is reversible. Reversibility is also important as a way to decrease processor heating and thus enable us to place more computing units in the same volume. In this paper, we argue that from this viewpoint, interval uncertainty is more appropriate than the more general set uncertainty -- and, similarly, that fuzzy numbers (for which all alpha-cuts are intervals) are more appropriate than more general fuzzy sets. We also explain why intervals (and fuzzy numbers) are indeed ubiquitous in applications.
Original file UTEP-CS-19-16 in pdf
Updated version UTEP-CS-19-16a in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 100-103.
People usually underestimate the time passed since distant events, and overestimate the time passed since recent events. There are several explanations for this "telescoping effect", but most current explanations utilize specific features of human memory and/or human perception. We show that the telescoping effect can be explained at a much more basic level of decision theory, without the need to invoke any specific ways we perceive and process time.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 94-99.
Students feel more comfortable with rational numbers than with irrational ones. Thus, when teaching the beginning of calculus, it is desirable to have examples of simple problems for which both the zeros and the extremum points are rational. Recently, an algorithm was proposed for generating cubic polynomials with this property. However, from the computational viewpoint, the existing algorithm is not the most efficient one: in addition to applying explicit formulas, it also uses trial-and-error exhaustive search. In this paper, we propose a computationally efficient algorithm for generating all such polynomials: namely, an algorithm that uses only explicit formulas.
Published version has typos, corrected version is in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 142-146.
At first glance, it may seem that revolutions happen when life becomes really intolerable. However, historical analysis shows a different story: that revolutions happen not when life becomes intolerable, but when a reasonably prosperous level of living suddenly worsens. This empirical observation seems to contradict traditional decision theory ideas, according to which, in general, people's happiness monotonically depends on their level of living. A more detailed model of human behavior, however, takes into account not only the current level of living, but also future expectations. In this paper, we show that if we properly take these future expectations into account, then we get a natural explanation of the revolution phenomenon.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 89-93.
A recent study of chimpanzees has shown that on the individual basis, they are, surprisingly, much better than humans in simple tasks requiring intelligence and memory. A usual explanation -- called cognitive tradeoff -- is that a human brain has sacrificed some of its data processing (computation) abilities in favor of enhancing the ability to communicate; as a result, while individual humans may not be as smart as possible, jointly, we can solve complex problems. A similar cognitive tradeoff phenomenon can be observed in computer clusters: the most efficient computer clusters are not formed from the fastest, most efficient computers, they are formed from not-so-fast computers which are, however, better in their communication abilities than the fastest ones. In this paper, we propose a simple model that explains the cognitive tradeoff phenomenon.
File in pdf
Published in Proceedings of the World Congress of the International Fuzzy Systems Association and the Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2019, Lafayette, Louisiana, June 18-22, 2019, pp. 746-751.
Nobel-prize winning physicist Lev Landau liked to emphasize that logarithms are not infinity -- meaning that from the physical viewpoint, logarithms of infinite values are not really infinite. Of course, from a literal mathematical viewpoint, this statement does not make sense: one can easily prove that the logarithm of infinity is infinite. However, when a Nobel-prize winning physicist makes a statement, you do not want to dismiss it, you want to interpret it. In this paper, we propose a possible physical explanation of this statement. Namely, in physics, nothing is really infinite: according to modern physics, even the Universe is finite in size. From this viewpoint, infinity simply means a very large value. And here lies our explanation: while, e.g., the square of a very large value is still very large, the logarithm of a very large value can be quite reasonable -- and for the very large values encountered in physics, logarithms are indeed very reasonable.
Original file UTEP-CS-19-11 in pdf
Updated version UTEP-CS-19-11a in pdf
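A quick numeric illustration of the point above: even for a number as "physically infinite" as the roughly 10^80 particles in the observable Universe, the logarithm is modest.

```python
# A quick numeric illustration: the natural logarithm of roughly 10**80
# (a commonly cited estimate of the number of particles in the observable
# Universe) is only about 184.
import math
print(math.log(10.0) * 80)     # natural log of 10**80, about 184.2
```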
Published in Proceedings of the World Congress of the International Fuzzy Systems Association and the Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2019, Lafayette, Louisiana, June 18-22, 2019, pp. 106-112.
There are many different independent factors that affect student grades. There are many physical situations like this, in which many different independent factors affect a phenomenon, and in most such situations, we encounter normal distribution -- in full accordance with the Central Limit Theorem, which explains that in such situations, distribution should be close to normal. However, the grade distribution is definitely not normal -- it is multi-modal. In this paper, we explain this strange phenomenon, and, moreover, we explain several observed features of this multi-modal distribution.
Original file UTEP-CS-19-10 in pdf
Updated version UTEP-CS-19-10a in pdf
Published in Proceedings of the World Congress of the International Fuzzy Systems Association and the Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2019, Lafayette, Louisiana, June 18-22, 2019, pp. 113-120.
In the non-fuzzy (e.g., interval) case, if two experts' opinions are consistent, then, as the result of fusing the knowledge of these two experts, we take the intersection of the two sets (e.g., intervals) describing the experts' opinions. If the experts are inconsistent, i.e., if the intersection is empty, then a reasonable idea is to assume that at least one of these experts is right, and thus, to take the union of the two corresponding sets. In practice, expert opinions are often imprecise; this imprecision can be naturally described in terms of fuzzy logic -- a technique specifically designed to describe such imprecision. In the fuzzy case, expert opinions are not always absolutely consistent or absolutely inconsistent; they may be consistent to a certain degree. In this case, we show how the above natural idea of fusing expert opinions can be extended to the fuzzy case. As a result, we get, in general, not "and" (which would correspond to the intersection), not "or" (which would correspond to the union), but rather an appropriate fuzzy combination of "and"- and "or"-operations.
Original file UTEP-CS-19-09 in pdf
Updated version UTEP-CS-19-09b in pdf
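The crisp (interval) fusion rule described at the start of the abstract above can be written in a few lines (the example intervals are assumptions for illustration; the fuzzy extension is what the paper itself develops):

```python
# A minimal sketch of the crisp interval fusion rule described in the abstract:
# intersect consistent opinions, keep the union when they are inconsistent.
def fuse(a, b):
    """a, b are intervals (lo, hi); returns a list of intervals."""
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    if lo <= hi:                      # consistent experts: take the intersection
        return [(lo, hi)]
    return [a, b]                     # inconsistent experts: keep both (the union)

print(fuse((1.0, 3.0), (2.0, 5.0)))   # -> [(2.0, 3.0)]
print(fuse((1.0, 2.0), (4.0, 5.0)))   # -> [(1.0, 2.0), (4.0, 5.0)]
```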
Published in Proceedings of the 12th International Workshop on Constraint Programming and Decision Making CoProd'2019, Part of the World Congress of the International Fuzzy Systems Association and the Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2019, Lafayette, Louisiana, June 17, 2019, pp. 813-819.
The Liouville-Bratu-Gelfand equation appears in many different physical situations, ranging from combustion to explosions to astrophysics. The fact that the same equation appears in many different situations seems to indicate that this equation should not depend on any specific physical process, and that it should be possible to derive it from general principles. This is indeed what we show in this paper: this equation can be naturally derived from basic symmetry requirements.
File in pdf
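For reference, a standard form of the equation mentioned above (as it is commonly written; the paper may use a slightly different normalization) is:

```latex
% Liouville--Bratu--Gelfand equation (standard form)
\[
  \Delta u + \lambda\, e^{u} = 0 .
\]
```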
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 118-121.
To make expert estimates of pavement condition more accurate, the American Society for Testing and Materials (ASTM) split one of the original pavement distress categories, for which experts previously provided a single numerical estimate, into two subcategories to be estimated separately. While this split has indeed made expert estimates more accurate, there is a problem: to get a good understanding of the road quality, we would like to see how this quality changed over time, and it is not easy to compare past estimates (based on the old methodology) with the new estimates, which are based on the new after-split methodology. In this paper, we show that a linear calibration reduces the disagreement between these two types of estimates -- and thus leads to a more adequate picture of how the road quality changes with time.
File in pdf
Published in Journal of Uncertain Systems, 2019, Vol. 13, No. 2, pp. 84-88.
When designing a system, we need to perform testing and checking on all levels of the system hierarchy, from the most general system level to the most detailed level. Our resources are limited, so we need to find the best way to allocate these resources, i.e., we need to decide how much effort to spend on each of the levels. In this paper, we formulate this problem in precise terms, and provide a solution to the resulting optimization problem.
File in pdf
Published in Proceedings of the Joint 11th Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'2019 and International Quantum Systems Association (IQSA) Workshop on Quantum Structures, Prague, Czech Republic, September 9-13, 2019.
One of the fundamental problems of modern physics is the problem of divergence: e.g., when we try to compute the overall energy of the electric field generated by a charged elementary particle, we get a physically meaningless infinite value. In this paper, we show that one way to avoid these infinities is to take into account that measurements are always imprecise -- and thus, we never get the exact values of the physical quantities, only intervals of possible values. We also show that 3-dimensional space is the simplest one in which such interval uncertainty is inevitable. This may explain why the physical space is (at least) 3-dimensional.
Original file UTEP-CS-19-05 in pdf
Updated version UTEP-CS-19-05b in pdf
Published in Proceedings of the Joint 11th Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'2019 and International Quantum Systems Association (IQSA) Workshop on Quantum Structures, Prague, Czech Republic, September 9-13, 2019.
Many practical problems necessitate faster and faster computations. Simple physical estimates show that eventually, to move beyond a limit caused by the speed-of-light restriction on communication speed, we will need to use quantum -- or, more generally, reversible -- computing. Thus, we need to be able to transform the existing algorithms into a reversible form. Such transformation schemes exist. However, these schemes are not very efficient. Indeed, in general, when we write an algorithm, we compose it of several pre-existing modules. It would be nice to be able to similarly compose a reversible version of our algorithm from reversible versions of these modules -- but the existing transformation schemes cannot do it; they require that we, in effect, program everything "from scratch". It is therefore desirable to come up with alternative transformations, transformations that transform compositions into compositions and thus transform a modular program in an efficient way -- by utilizing the transformed modules. Such transformations are proposed in this paper.
Original file UTEP-CS-19-04 in pdf
Updated version UTEP-CS-19-04b in pdf
Published in Journal of Economics and Banking, 2019, Vol. 3, No. 1, pp. 19-36.
In this paper, we show that many semi-heuristic econometric formulas can be derived from the natural symmetry requirements. The list of such formulas includes many famous formulas provided by Nobel-prize winners, such as Hurwicz optimism-pessimism criterion for decision making under uncertainty, McFadden's formula for probabilistic decision making, Nash's formula for bargaining solution -- as well as Cobb-Douglas formula for production, gravity model for trade, etc.
File in pdf
Published in Proceedings of the IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2019, New Orleans, Louisiana, June 23-26, 2019, pp. 790-794.
Fuzzy logic is normally used to describe the uncertainty of human knowledge and human reasoning. Physical phenomena are usually described by probabilistic models. In this paper, we show that in extremal conditions, when the concentrations are very large, some formulas describing physical interactions become fuzzy-type. We also show the observable consequences of such fuzzy-type formulas: they lead to bursts of gravitational waves.
Original file UTEP-CS-19-02 in pdf
Updated version UTEP-CS-19-02b in pdf
Published in International Mathematical Forum, 2019, Vol. 14, No. 1, pp. 11-16.
One of the main problems of autistic children is that it is very difficult for them to switch to a different state, to a different activity -- and such switches are often needed. Researchers have recently shown that bilingualism helps autistic children function: namely, it is somewhat easier for bilingual children to switch to a new activity. In this paper, we provide a possible explanation for this empirical phenomenon. Namely, we show that, in general terms, autism means difficulty with breaking symmetries of a state, and we describe how this general reformulation indeed explains the above recently discovered phenomenon.
File in pdf