In many application areas, there are skilled experts who excel in control and decision making. It is desirable to come up with an automated system that would use their skills to help others make similarly good decisions. Often, the experts can only formulate their skills in terms of rules that use imprecise ("fuzzy") words from natural language like "small". To transform these fuzzy rules into a precise control strategy, Zadeh designed a special technique that he called fuzzy. This technique was later improved -- e.g., by adding explicit information about what the experts consider not a good control; such an addition is known as the intuitionistic fuzzy technique. The problem that we consider in this paper is that in many cases, in addition to the fuzzy expert rules, we also have some extra knowledge about the function u(x) -- the function that describes what control to apply for a given input x. For example, it is often reasonable to require that this function is increasing: the larger x, the more control we should apply. In this paper, we use the general decision theory technique to show how this additional information can be incorporated into fuzzy and intuitionistic fuzzy control.
File UTEP-CS-25-46 in pdf
The traditional statistical approach to hypothesis testing is based on the idea that events with very small probability cannot happen. The problem is that the usual naive formalization of this approach is, in general, inconsistent. This inconsistency, in its turn, often leads to irreproducibility and inadequacy of the results. In this paper, we show how a new understanding of randomness -- known as Algorithmic Information Theory -- can help resolve these problems. By the way, this new approach was pioneered, in the mid-1960s, by Andrei Kolmogorov, the same mathematician who, in the mid-1930s, pioneered a formalization of probability theory.
File UTEP-CS-25-45 in pdf
File UTEP-CS-25-44 in pdf
One of the most important techniques in deep learning applications is the attention technique. In this paper, we provide a theoretical explanation for the main empirical formula of attention.
File UTEP-CS-25-43 in pdf
Recent studies have shown that a short burst of intensive physical activity is better for a person's health than the same amount of activity spread over time. In this paper, we provide a theoretical explanation for this empirical fact.
File UTEP-CS-25-42 in pdf
It is well known that logical implication does not always reflect the commonsense understanding of if-then statements. For example, statements like "if 2 + 2 = 5 then witches are flying" are absolutely correct from the viewpoint of formal logic, but make no sense from the commonsense viewpoint. In general, it is difficult to describe commonsense implication in precise terms. This difficulty remains when, instead of considering whether an if-then statement is true or not, we try to estimate the probability that a given if-then statement is true. A recent paper has proposed an empirical formula that captures this commonsense probability reasonably well. In this paper, we provide a theoretical explanation for this empirical formula.
File UTEP-CS-25-41 in pdf
Computers consume 10% of the world's energy, and this proportion continues to increase. One of the main reasons for this energy consumption is that computations use irreversible processes -- e.g., an "and"-gate is irreversible -- and, according to thermodynamics, this leads to energy consumption. To decrease the energy use, a natural idea is to use reversible computing -- of which quantum computing is an important case. It is known how to make deterministic computations reversible. However, since many real-life processes are random, it is often important to simulate random processes as well. In this paper, we analyze how to make simulations of random processes reversible. It turns out that this can be done by using a known notion of local time of a random process.
File UTEP-CS-25-40 in pdf
Analysis of the notion of love from the decision theory viewpoint has revealed paradoxical situations, when strong positive emotions about others can make people unhappy. Interestingly, many religious communities seem to avoid this negative effect. In this paper, we provide a possible explanation for this avoidance.
File UTEP-CS-25-39 in pdf
In principle, there can be many different characteristics of classification quality. However, in practice, mostly the following three characteristics are used: precision, recall, and accuracy -- as well as their combinations like F1. In this paper, we use basic decision theory to explain why these three characteristics are the most frequently used.
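The three characteristics (and F1) can be sketched as follows; the confusion-matrix variable names (tp, fp, fn, tn) are the usual conventions, not notation from the paper:

```python
def classification_metrics(tp: int, fp: int, fn: int, tn: int):
    """Return (precision, recall, accuracy, F1) from confusion-matrix counts."""
    precision = tp / (tp + fp)          # fraction of positive predictions that are correct
    recall = tp / (tp + fn)             # fraction of actual positives that are found
    accuracy = (tp + tn) / (tp + fp + fn + tn)  # overall fraction of correct predictions
    f1 = 2 * precision * recall / (precision + recall)  # harmonic mean of precision and recall
    return precision, recall, accuracy, f1

# A balanced example: 40 true positives, 10 false positives, 10 false negatives, 40 true negatives
p, r, a, f1 = classification_metrics(tp=40, fp=10, fn=10, tn=40)
print(p, r, a, f1)  # 0.8 0.8 0.8 0.8
```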
File UTEP-CS-25-38 in pdf
While modern computers are fast, their speed is not sufficient for many practical problems. Thus, we need faster computers. A significant part of computation time is spent on moving data from one location to another -- e.g., from a memory cell to the processor. According to relativity theory, all communications are limited by the speed of light -- and computer communications are already close to this limit. Thus, the only way to further speed up computations is to decrease the size of a memory cell and of other components of a computer. The ultimate decrease is when we use the smallest possible object as a memory cell -- an elementary particle. For this purpose, we cannot use any of the previously known elementary particles, since they have a single stable state -- while to store a bit, we need an object with two stable states that would represent 0 and 1. Recently, researchers came up with a suggestion that there are elementary particles that have two stable states; such particles are known as paraparticles. Thus, paraparticles can serve as bit-storing elements -- and so, their use can speed up computations. In this paper, we show that paraparticles can speed up computations even further -- namely, they may be able to help us solve known NP-hard problems in feasible time.
File UTEP-CS-25-37 in pdf
In this paper, we show that because of natural space-time symmetries, acceptable micro-violations of causality inevitably lead to the possibility of macro-scale causality violations. We also discuss how such macro-scale violations can be potentially detected.
File UTEP-CS-25-36 in pdf
It is known that subjective perception of time is biased: the perceived time is, in general, different from the physical time. Empirical studies have shown that in the first approximation, the dependence of subjective time on physical time is described reasonably well by a power law -- and even better by a logarithm. In this paper, we show that such a dependence is indeed optimal with respect to any reasonable optimality criterion. Since we humans are a product of billions of years of optimizing evolution, this proven optimality explains why these dependencies reasonably accurately describe our time perception. We also explain why we cannot remember early childhood events.
File UTEP-CS-25-35 in pdf
Published in Proceedings of the 17th International Workshop on Constraint Programming and Decision Making CoProD'2025, Oldenburg, Germany, September 22, 2025.
Cyber-intrusions are a big problem for communications. It is therefore desirable to continuously develop new methods for detecting intrusions. A recent paper has shown that in many cases, intrusions can be detected if we form an inclusion graph of all natural groups of computers with similar activities, and find the topology of this graph. Specifically, the appearance of a new cycle is, empirically, a good indication of an intrusion. In this paper, we provide an explanation for this empirical phenomenon.
File UTEP-CS-25-34 in pdf
Published in Proceedings of the 17th International Workshop on Constraint Programming and Decision Making CoProD'2025, Oldenburg, Germany, September 22, 2025.
Chemical reactions are usually described by equations of chemical kinetics, where the reaction rate is proportional to the product of the concentrations of the interacting molecules. Such equations have been thoroughly studied. However, in many practical situations, e.g., in biochemistry, chemical reactions are strongly catalyzed, in which case the reaction rate is proportional to the minimum of the concentrations. Such equations are much less studied in chemistry, so it is reasonable to look for other areas where similar equations appear. One such area is fuzzy techniques, so we hope that fuzzy techniques can help when analyzing such complex chemical and biochemical reactions.
File UTEP-CS-25-33 in pdf
Published in Proceedings of the 17th International Workshop on Constraint Programming and Decision Making CoProD'2025, Oldenburg, Germany, September 22, 2025.
We rarely have precise knowledge about a physical quantity. Often, the only information that we have about a quantity is an interval. To process this information, we need to be able to represent intervals in a computer. For this purpose, we need to represent an interval by numbers. Usually, the most effective and efficient ways to represent an interval are either to represent it by its endpoints or by its midpoint and radius (half-width). This choice has been partly explained by using natural invariance -- with respect to selecting a different measuring unit or a different starting point for measurements. In this paper, we extend this explanation by listing all numerical characteristics of an interval that have such natural invariances, and we list possible applications of our result.
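The two representations discussed above can be sketched as follows; the function names are ours, chosen for illustration:

```python
def to_midpoint_radius(lo: float, hi: float) -> tuple:
    """Convert the endpoint representation [lo, hi] to (midpoint, radius)."""
    return (lo + hi) / 2, (hi - lo) / 2

def to_endpoints(m: float, r: float) -> tuple:
    """Convert the (midpoint, radius) representation back to endpoints [m - r, m + r]."""
    return m - r, m + r

# The interval [40, 60] has midpoint 50 and radius (half-width) 10
m, r = to_midpoint_radius(40.0, 60.0)
print(m, r)                 # 50.0 10.0
print(to_endpoints(m, r))   # (40.0, 60.0)
```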
File UTEP-CS-25-32 in pdf
Published in Proceedings of the 17th International Workshop on Constraint Programming and Decision Making CoProD'2025, Oldenburg, Germany, September 22, 2025.
In many practical situations ranging from economics to medicine to geosciences, we use expert estimates of different quantities. To get a better understanding of the expert's opinion, it is also reasonable to ask the expert for the perceived accuracy of his/her estimate. As a result, each expert estimate looks like 50 plus minus 10, i.e., it is, in effect, an interval of the type [50 − 10, 50 + 10]. Often, we ask several experts. In this case, we need to combine several resulting subjective intervals into a single interval that we will use to make a decision. In this paper, we describe a natural way to combine subjective intervals.
File UTEP-CS-25-31 in pdf
Published in Proceedings of the 17th International Workshop on Constraint Programming and Decision Making CoProD'2025, Oldenburg, Germany, September 22, 2025.
According to the experimental data, for the sum of two quantities -- each of which is described by a subjective interval -- the most natural interval is described by a Pythagoras-type formula. In this paper, we show that this experimental result can be explained based on decision theory. We then use this explanation to show how we can generalize the empirical formula from addition to general data processing algorithms.
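One plausible reading of such a "Pythagoras-type" combination -- midpoints add, while the half-widths combine in quadrature -- can be sketched as follows; this specific form is our illustration of the idea, not necessarily the paper's exact formula:

```python
import math

def combine(m1: float, r1: float, m2: float, r2: float) -> tuple:
    """Combine two subjective intervals [m1 - r1, m1 + r1] and [m2 - r2, m2 + r2]:
    the midpoints add, and the radii combine Pythagoras-style (in quadrature)."""
    return m1 + m2, math.sqrt(r1 ** 2 + r2 ** 2)

# Radii 3 and 4 combine into radius 5, as in the 3-4-5 right triangle
m, r = combine(50.0, 3.0, 30.0, 4.0)
print(m, r)  # 80.0 5.0
```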
File UTEP-CS-25-30 in pdf
This paper provides theoretical explanations for selecting the best machine learning techniques for decision making. Mostly, it concentrates on the theoretical explanation of why, empirically, the ReLU activation function works best. There are many possible criteria for selecting such a function: we may look for the fastest-to-compute function, we may look for the function that is the most interpretable, etc. Somewhat unexpectedly, all these criteria lead to the exact same conclusion -- that under some reasonable assumptions, ReLU is the best. After describing these results, we explain why all these criteria lead to the same selection. We finish this chapter with an overview of how similar techniques can help with explaining other empirically successful features of deep learning -- and of data processing in general.
File UTEP-CS-25-29 in pdf
The usual softmax formula transforms the degrees to which we are convinced that an object belongs to different classes -- as computed by subnetworks of a neural network -- into the probabilities that this object belongs to each class. The sum of these probabilities is always 1 -- which means that this formula implicitly assumes that the given object belongs to one of the given classes. In practice, however, there is always a possibility that an object does not belong to any of the given classes. To take this possibility into account, it is desirable to appropriately generalize the softmax formula. In this paper, we show that all extensions that satisfy several reasonable conditions form a 1-parametric family; these extensions correspond to adding a constant to the denominator of the softmax formula.
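The 1-parametric family described above can be sketched as follows: the usual softmax, plus a constant added to the denominator. With c > 0 the probabilities sum to less than 1, and the deficit can be read as the probability that the object belongs to none of the given classes. The parameter name c is ours:

```python
import math

def generalized_softmax(degrees, c: float = 0.0):
    """Softmax with a constant c >= 0 added to the denominator.
    For c = 0, this is the usual softmax (probabilities sum to 1)."""
    exps = [math.exp(d) for d in degrees]
    denom = sum(exps) + c
    return [e / denom for e in exps]

probs = generalized_softmax([2.0, 1.0, 0.5], c=1.0)
# The remaining mass 1 - sum(probs) is the "none of the given classes" probability
print(probs, 1 - sum(probs))
```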
File UTEP-CS-25-28 in pdf
At present, one of the most effective and widely used methods of software testing is a method known as fuzzing. The name of this method comes from the same word fuzzy as in fuzzy logic and related fuzzy techniques. However, the authors of the fuzzing technique have always emphasized that there is no relation between the two uses of this word. It is indeed true that the invention of fuzzing was not related to fuzzy logic. However, we show, in this paper, that a straightforward use of simple fuzzy ideas naturally leads to fuzzing -- in this sense, these two uses of the word fuzzy are indeed related.
File UTEP-CS-25-27 in pdf
To appear in: Marek Z. Reformat, Sabrina Senatore, Yusuke Nojima, and Vladik Kreinovich (eds.), "Fuzzy Systems 60 Years Later: Past, Present, and Future. Proceedings of the Joint 2025 World Congress on the International Fuzzy Systems Association and the 2025 Annual Conference of the North American Fuzzy Information Processing Society IFSA-NAFIPS 2025", Banff, Canada, August 16-19, 2025, Springer, Cham, Switzerland.
In this paper, we show that, according to recent empirical studies, there may be some relation between all these phenomena. Specifically, we mention several interesting related numerical similarities.
Original file UTEP-CS-25-26 in pdf
Updated version UTEP-CS-25-26a in pdf
To appear in: Marek Z. Reformat, Sabrina Senatore, Yusuke Nojima, and Vladik Kreinovich (eds.), "Fuzzy Systems 60 Years Later: Past, Present, and Future. Proceedings of the Joint 2025 World Congress on the International Fuzzy Systems Association and the 2025 Annual Conference of the North American Fuzzy Information Processing Society IFSA-NAFIPS 2025", Banff, Canada, August 16-19, 2025, Springer, Cham, Switzerland.
It is desirable to make AI explainable, i.e., to translate its black-box results into natural-language explanations. A reasonable first step in this translation is to first translate AI results into a natural language. There is already a technique that successfully relates natural-language descriptions with precise functions -- it is known as the fuzzy technique. It is therefore reasonable to use the fuzzy technique for this first step towards explainability. There are many different versions of fuzzy techniques, as well as different versions of neural networks. It is therefore important to analyze which versions of fuzzy techniques can, in principle, cover different versions of neural networks. In this paper, we provide an answer to a particular case of this general question: for which activation functions can the functions computed by a neural network also be computed by Takagi-Sugeno systems with constant or linear outputs.
Original file UTEP-CS-25-25 in pdf
Updated version UTEP-CS-25-25a in pdf
To appear in: Marek Z. Reformat, Sabrina Senatore, Yusuke Nojima, and Vladik Kreinovich (eds.), "Fuzzy Systems 60 Years Later: Past, Present, and Future. Proceedings of the Joint 2025 World Congress on the International Fuzzy Systems Association and the 2025 Annual Conference of the North American Fuzzy Information Processing Society IFSA-NAFIPS 2025", Banff, Canada, August 16-19, 2025, Springer, Cham, Switzerland.
In many applications, it is useful to use the fuzzy-motivated techniques of F-transform: they often lead to better results than all previously known methods. However, this empirical success is somewhat puzzling. Indeed, from the purely mathematical viewpoint, F-transform is a particular case of convolution -- so why is it more effective than a general convolution? In this paper, we explain this puzzle by showing that, in contrast to general convolution, F-transform requires many fewer computational steps. This is a big advantage in real-time situations when we need answers as soon as possible. Also, in situations when there is a finite time budget, this allows us to have more time for additional data processing -- and thus, to get better results.
Original file UTEP-CS-25-24 in pdf
Updated version UTEP-CS-25-24a in pdf
In many practical situations -- e.g., in many driving situations -- skilled humans are still performing better than automated control systems. It is therefore desirable to incorporate the knowledge of these skilled human controllers into an automated system. This is not easy, since experts cannot describe a significant part of this knowledge in precise computer-understandable terms; they can only describe it by using imprecise ("fuzzy") words from natural language like "small". Techniques for translating such "fuzzy" knowledge into a precise computer-understandable form are known as fuzzy techniques. There are many versions of fuzzy techniques; some work better, some perform worse -- and the difference between the best and worst performances is often drastic. At present, the best techniques are often selected by time-consuming trial-and-error. In this paper, we show that in many cases, such a procedure can be sped up if we use a natural requirement that these techniques should not change under appropriate normalizations.
File UTEP-CS-25-23 in pdf
To appear in: Jesus Chamorro and Daniel Sanchez Fernandez (eds.), Celebration of 60 Years of Fuzzy Sets, Reims, France, 2025.
In this paper, we express our view of what were the main challenges that fuzzy research has faced, how some of these challenges were successfully resolved, and what are, in our opinion, the main remaining challenges.
File UTEP-CS-25-22 in pdf
To appear in: Abstracts of the Conference on Soft Computing and Intelligent Systems with Applications, Gyor, Hungary, June 5-7, 2025.
We show that several important empirically successful choices in computational intelligence -- such as selecting probabilities in evolutionary computation, selecting membership functions and "and"- and "or"-operations in fuzzy logic, and selecting activation functions in neural networks -- can be explained by natural symmetry requirements.
File UTEP-CS-25-21 in pdf
To appear in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics IEEE SMC 2025, Vienna, Austria, October 5-8, 2025.
While current machine-learning-based AI techniques have been spectacularly successful, their present applications still leave many important open questions -- for example, how to make their results more reliable or, at least, how to gauge how reliable each AI recommendation is. In this paper, we argue that to fully answer these questions, we need to go beyond the current AI techniques, and that in this development, systems-, human-, and cybernetics-based ideas not only naturally appear, they also seem to provide a way to the desired answers.
Original file UTEP-CS-25-20 in pdf
Updated version UTEP-CS-25-20a in pdf
To appear in: Hien Thi Thu Nguyen, Hai Hong Phan, and Van Nam Huynh (eds.), Data Science in Finance and Accounting, Springer, Cham, Switzerland
In many practical situations, it is necessary to fairly divide the joint gain between the contributors. In the 1950s, the Nobelist Lloyd Shapley showed that under some reasonable conditions, there is only one way to make this division. The resulting Shapley value is now actively used in situations that go beyond economics and finance -- and in which Shapley's conditions are not always satisfied: in machine learning, in systems engineering, etc. In this paper, we explain why the Shapley value can be applied to such situations, and how we can generalize the Shapley value to make it even more adequate for these new applications.
Original file UTEP-CS-25-19 in pdf
Updated version UTEP-CS-25-19a in pdf
To appear in: Hien Thi Thu Nguyen, Hai Hong Phan, and Van Nam Huynh (eds.), Data Science in Finance and Accounting, Springer, Cham, Switzerland
Normal distributions are ubiquitous, but many actual distributions are different from normal -- for example, they are skewed. To describe such distributions, it is desirable to have a few-parametric family that extends the family of normal distributions. Several such families have been proposed. Empirically, the most effective among them is the family of so-called skew-normal distributions first proposed by A. Azzalini. In particular, this family is effective in econometrics. In this paper, we provide a theoretical explanation for this empirical success. This explanation is similar to the explanation of why ReLU activation functions are the most effective in deep learning.
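For reference, Azzalini's skew-normal family has the density (this formula is standard background, not a result of the paper):

```latex
f(x; \alpha) = 2\,\varphi(x)\,\Phi(\alpha x),
```

where \varphi and \Phi are the standard normal pdf and cdf, respectively: \alpha = 0 recovers the standard normal distribution, and the sign of \alpha controls the direction of skewness.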
Original file UTEP-CS-25-18 in pdf
Updated version UTEP-CS-25-18a in pdf
File UTEP-CS-25-17 in pdf
To appear in: Marek Z. Reformat, Sabrina Senatore, Yusuke Nojima, and Vladik Kreinovich (eds.), "Fuzzy Systems 60 Years Later: Past, Present, and Future. Proceedings of the Joint 2025 World Congress on the International Fuzzy Systems Association and the 2025 Annual Conference of the North American Fuzzy Information Processing Society IFSA-NAFIPS 2025", Banff, Canada, August 16-19, 2025, Springer, Cham, Switzerland.
Empirical studies show that a proper sense of belonging -- to the university, to the department, to the profession -- is important for students to succeed. To help students develop this sense, it is desirable to have quantitative models describing how the sense of belonging affects student success. In this paper, we show that while we can come up with a reasonable quantitative model by using general model building techniques, we can get a more adequate model if we use fuzzy techniques -- techniques that take into account experts' degrees of certainty.
Original file UTEP-CS-25-16 in pdf
Updated version UTEP-CS-25-16a in pdf
To appear in: Marek Z. Reformat, Sabrina Senatore, Yusuke Nojima, and Vladik Kreinovich (eds.), "Fuzzy Systems 60 Years Later: Past, Present, and Future. Proceedings of the Joint 2025 World Congress on the International Fuzzy Systems Association and the 2025 Annual Conference of the North American Fuzzy Information Processing Society IFSA-NAFIPS 2025", Banff, Canada, August 16-19, 2025, Springer, Cham, Switzerland.
There are two cases in which it has been empirically shown that a convex combination of the interval's endpoints works better than any other combination: processing interval data and dealing with situations in which we know both approximate probability and possibility and we need to make a decision. In this paper, we provide an explanation of both phenomena.
Original file UTEP-CS-25-15 in pdf
Updated version UTEP-CS-25-15a in pdf
To appear in: Marek Z. Reformat, Sabrina Senatore, Yusuke Nojima, and Vladik Kreinovich (eds.), "Fuzzy Systems 60 Years Later: Past, Present, and Future. Proceedings of the Joint 2025 World Congress on the International Fuzzy Systems Association and the 2025 Annual Conference of the North American Fuzzy Information Processing Society IFSA-NAFIPS 2025", Banff, Canada, August 16-19, 2025, Springer, Cham, Switzerland.
Interval-valued and type-2 fuzzy techniques were designed to provide a more adequate representation of expert knowledge than the traditional (type-1) fuzzy techniques. Somewhat unexpectedly, they also often turn out to be more effective even when there is no expert knowledge at all -- when we are simply using fuzzy rules to fit experimental data. In precise terms, for the same number of parameters, interval-valued and type-2 systems often provide a better fit for the data and/or better quality control than traditional (type-1) fuzzy techniques. In this paper, we provide a theoretical explanation for this surprising phenomenon.
Original file UTEP-CS-25-14 in pdf
Updated version UTEP-CS-25-14a in pdf
To appear in: Marek Z. Reformat, Sabrina Senatore, Yusuke Nojima, and Vladik Kreinovich (eds.), "Fuzzy Systems 60 Years Later: Past, Present, and Future. Proceedings of the Joint 2025 World Congress on the International Fuzzy Systems Association and the 2025 Annual Conference of the North American Fuzzy Information Processing Society IFSA-NAFIPS 2025", Banff, Canada, August 16-19, 2025, Springer, Cham, Switzerland.
In medicine, many diagnoses are made when, for some value k, at least k of n possible symptoms are present. Many such symptoms -- such as fever -- are, in reality, fuzzy. For example, it makes no sense to say that 38.0 is a fever while 37.9 is not: both are fevers to some degree. Once such degrees are given, we need to use them to estimate the degree to which the patient has the corresponding disease. For this problem, the usual fuzzy techniques require exponentially many computational steps -- so it is desirable to have a more efficient algorithm. Such an algorithm was previously proposed for some specific "and"-operation (t-norm). However, in different application areas, different "and"-operations describe the reasoning within the domain. So, it is desirable to extend the existing feasible algorithm to the case of general "and"-operations. In this paper, we describe such an extension.
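The contrast between the exponential brute-force computation and a feasible one can be sketched for the specific case of min as the "and"-operation and max as the "or"-operation (the general t-norm case treated in the paper is not reproduced here): the brute-force version ranges over all k-element subsets of symptoms, while for min/max the same value is simply the k-th largest degree:

```python
import itertools

def degree_brute_force(degrees, k: int) -> float:
    """Degree that at least k symptoms are present, with min as "and" and max
    as "or": exponentially many subsets of size k are examined."""
    return max(min(subset) for subset in itertools.combinations(degrees, k))

def degree_fast(degrees, k: int) -> float:
    """For min/max, the same value is just the k-th largest degree."""
    return sorted(degrees, reverse=True)[k - 1]

d = [0.9, 0.2, 0.7, 0.4]   # degrees to which each symptom is present
print(degree_brute_force(d, 2), degree_fast(d, 2))  # 0.7 0.7
```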
Original file UTEP-CS-25-13 in pdf
Updated version UTEP-CS-25-13a in pdf
In the historically first planes, wings were orthogonal to the fuselage. However, it later turned out that, from the aerodynamic viewpoint, it is most efficient to place the wings at about 37 degrees from this orthogonal direction -- and this is where wings are placed in most modern planes. There exist theoretical explanations for this optimality -- explanations based on solving the equations of aerodynamics. In situations when only a complex, not-very-intuitive explanation exists, it is desirable to come up with a simpler, more intuitive explanation. For the wing angles, such an explanation is provided in this paper. Namely, we show that, somewhat surprisingly, this is all related to the so-called Egyptian triangle -- a right triangle with sides 3, 4, and 5. The name for this triangle comes from the fact that already the ancient Egyptians were very familiar with this triangle -- they used it to accurately reproduce the right angle.
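The numerical connection mentioned above can be checked directly: in a 3-4-5 ("Egyptian") right triangle, the angle opposite the side of length 3 is arctan(3/4), close to the ~37-degree sweep angle:

```python
import math

# Angle opposite the side of length 3 in a right triangle with legs 3 and 4
angle = math.degrees(math.atan(3 / 4))
print(round(angle, 2))  # 36.87
```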
File UTEP-CS-25-12 in pdf
To appear in: Marek Z. Reformat, Sabrina Senatore, Yusuke Nojima, and Vladik Kreinovich (eds.), "Fuzzy Systems 60 Years Later: Past, Present, and Future. Proceedings of the Joint 2025 World Congress on the International Fuzzy Systems Association and the 2025 Annual Conference of the North American Fuzzy Information Processing Society IFSA-NAFIPS 2025", Banff, Canada, August 16-19, 2025, Springer, Cham, Switzerland.
In fuzzy clustering, we need to have non-linear functions of the membership degrees. Different nonlinear functions have been tried. Empirical evidence shows that for fuzzy clustering, the most effective nonlinear functions are u^m and u*log(u). In this paper, we provide a theoretical explanation for this empirical fact.
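For reference, the u^m nonlinearity is the "fuzzifier" in the standard fuzzy c-means objective function (standard background, not a result of the paper):

```latex
J_m = \sum_{i=1}^{c} \sum_{k=1}^{n} u_{ik}^{m}\,\|x_k - c_i\|^2,
\qquad \text{subject to } \sum_{i=1}^{c} u_{ik} = 1 \text{ for each } k,
```

where u_{ik} is the degree to which data point x_k belongs to the cluster with center c_i; the u*log(u) nonlinearity similarly appears in entropy-regularized versions of fuzzy clustering.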
Original file UTEP-CS-25-11 in pdf
Updated version UTEP-CS-25-11a in pdf
A recent paper showed that to make sure that the movements in the crowd are not chaotic, the directions of all the motions should deviate from some fixed direction by no more than 13 degrees. We show that this result provides a new geometric explanation for the seven plus minus two law in psychology, according to which we can keep in mind no more than 7 plus minus 2 items. We also show that all this is related to the somewhat mysterious appearance of 9- and 18-based number systems in Jewish and Mayan traditions.
File UTEP-CS-25-10 in pdf
A recent article in the Notices of the American Mathematical Society reminded the mathematics community that, under the Axiom of Choice, it is possible to have a universal predictor: if we input, into this predictor, the values of a function for all moments t < t_0 for some t_0, then, for almost all t_0, this predictor correctly predicts the next values of this function on some interval [t_0, t_0 + ε). This predictor cannot be used for actual predictions: it is based on the Axiom of Choice and is, therefore, not constructive. A natural question is: maybe it is possible to have another universal predictor, which is constructive? In this paper we show that, unfortunately, it is not possible to have a constructive universal predictor. In other words, the above universal predictor result cannot be used for actual predictions.
File UTEP-CS-25-09 in pdf
To appear in: Guillermo Badia, Manfred Droste, Andreas Blass, and Nachum Dershowitz (eds.), Fields of Logic and Computation IV: Essays dedicated to Yuri Gurevich, Springer, Cham, Switzerland.
Everyone talks about the need for Explainable AI -- when, to supplement a long difficult-to-understand sequence of computational steps leading to AI's decision, we are looking for a shorter, understandable, more informal explanation for this decision. In this paper, we argue that this need is a particular case of what we call Explainable Mathematics -- when we want to supplement a long sequence of arguments and/or computations with a shorter, understandable, more informal explanation. Important instances of Explainable Mathematics are Yuri Gurevich's Quisani dialogs that help explain complex results from theoretical computer science, and physicists' more informal explanations of complex physical phenomena. We explain that in the physics case, since -- according to most physicists -- all physical theories are approximate, the use of approximate, more informal methods often makes more sense than the use of rigorous methods that implicitly assume that the current theories are absolutely correct. We then apply this argument to one of the common uses of physics in the theory of computation -- the argument that the limitation by the speed of light limits the computation speed. Specifically, we show that quantum space-time ideas potentially allow computations at the micro-level speed of light, which can be higher than its usual macro-level value. This potential increase in possible communication speed can speed up computations.
Original file UTEP-CS-25-08 in pdf
Updated version UTEP-CS-25-08a in pdf
Published in: V. I. Slyusar and Y. P. Kondratenko (eds.), AI in Education System: Successful Cases and Perspectives, River Publishers, Aalborg, Denmark, 2025, pp. 113-123.
Many machine learning techniques -- including many techniques behind the current AI-based boom in machine learning -- come from the analysis of successful human learning strategies (and researchers expect that other human learning experiences can lead to even more effective AI-based systems). At this moment, so much experience has been accumulated in AI-based machine learning that it is time to start the analysis in the opposite direction -- to see what human-based pedagogy can learn from AI successes. In this chapter, we provide the first results of such an analysis -- some of which go somewhat against the current pedagogical wisdom.
File UTEP-CS-25-07 in pdf
Published in: V. I. Slyusar and Y. P. Kondratenko (eds.), AI in Education System: Successful Cases and Perspectives, River Publishers, Aalborg, Denmark, 2025, pp. 75-85.
When designing AI-based tools for education, it is important to take into account the experience of human teachers. In doing so, it is necessary to distinguish between the education features that are justified by the general features of the corresponding education task -- these features should be taken into account in AI-based learning as well -- and features which are specific for traditional non-AI teaching. In this paper, on the important example of bilingual education, we show that several empirically successful teaching strategies can be explained in the general context -- and thus, should be implemented in AI-based teaching as well.
File UTEP-CS-25-06 in pdf
To appear in: Michal Baczynski, Bernard De Baets, Michal Holcapek, Vladik Kreinovich, and Jesus Medina Moreno (eds.), Proceedings of the 13th Conference of the European Society for Fuzzy Logic and Technology EUSFLAT 2025, Riga, Latvia, July 21-25, 2025.
In many practical situations when we process 1-D data, the method of F-transform has turned out to be very useful. In this method, we can use either triangular membership functions or more complex ones. Because this method has been so successful in 1-D applications, a natural idea is to extend it to functions defined on 2-D and higher-dimensional domains -- e.g., to images. This method allows a natural generalization to rectangular domains, where it indeed turned out to be very effective. A recent paper showed that it can be extended to more general domains -- e.g., to triangular domains and to more general domains that are divided into triangles by triangulation. Interestingly, while all 1-D membership functions can be extended to rectangular domains, the current extension to triangular and more general domains was produced only for triangular membership functions. In this paper, we show that this restriction is not accidental: a natural extension of the F-transform to triangular domains is only possible for triangular membership functions. This may explain why such membership functions are often very effective.
File UTEP-CS-25-05 in pdf
Updated version UTEP-CS-25-05a in pdf
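For readers unfamiliar with the 1-D case that this paper generalizes, here is a minimal sketch of the direct and inverse F-transform with triangular membership functions; the node placement and the sample data are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def triangular_partition(nodes, x):
    """Triangular membership functions A_k centered at the given nodes,
    forming a Ruspini partition: they sum to 1 on [nodes[0], nodes[-1]]."""
    A = np.zeros((len(nodes), len(x)))
    for k, c in enumerate(nodes):
        if k > 0:                       # rising slope from the previous node
            l = nodes[k - 1]
            m = (x > l) & (x <= c)
            A[k, m] = (x[m] - l) / (c - l)
        else:
            A[k, x <= c] = 1.0          # leftmost function: flat on the left
        if k < len(nodes) - 1:          # falling slope toward the next node
            r = nodes[k + 1]
            m = (c <= x) & (x < r)
            A[k, m] = (r - x[m]) / (r - c)
        else:
            A[k, x >= c] = 1.0          # rightmost function: flat on the right
    return A

x = np.linspace(0.0, 1.0, 201)
nodes = np.linspace(0.0, 1.0, 6)        # 6 equally spaced nodes (an assumption)
A = triangular_partition(nodes, x)

f_vals = 2 * x + 1                      # sample 1-D data (linear, for clarity)
F = (A @ f_vals) / A.sum(axis=1)        # direct F-transform components F_k
f_hat = F @ A                           # inverse F-transform (reconstruction)
```

For linear data, the interior components F_k equal the data values at the nodes, and the inverse F-transform reproduces the data exactly away from the boundary -- a quick way to see why the method works well for smooth 1-D signals.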
Most information about the world comes from sensors -- and from the results of processing sensor data. In many practical situations -- e.g., in biomedical applications -- it is desirable to make sure that the sensors are as "invisible" as possible, in particular, that they are as small as possible. One way to achieve such small size is to use ultrathin-layer materials such as graphene. It is known that for such materials, strain causes electromagnetic effects -- which can be used to detect small strains. Interestingly, it turned out that the same equation describes both the relation between strain and electric effects and the relation between strain and magnetic effects -- although in these two cases, the physics is somewhat different. The fact that we get the same equation in two different physical situations leads to a natural conjecture that this equation should follow from first principles, without the need to use specific physical equations. In this paper, we show that this is indeed the case: one of the main equations of straintronics can be derived from first principles, without using specific equations of physics.
File UTEP-CS-25-04 in pdf
To appear in: Michal Baczynski, Bernard De Baets, Michal Holcapek, Vladik Kreinovich, and Jesus Medina Moreno (eds.), Proceedings of the 13th Conference of the European Society for Fuzzy Logic and Technology EUSFLAT 2025, Riga, Latvia, July 21-25, 2025.
In many practical situations, a group of people needs to share a success. What is the fair way to share this success? Nobelist John Nash showed that under reasonable conditions, the group should select the alternative for which the product of utility gains is the largest possible. This solution makes perfect sense from the fuzzy-formalized commonsense viewpoint: it maximizes the degree of confidence that all participants are happy. A natural question is: can we extend this result to a different class of situations, when a group of people needs to share sacrifices caused by a crisis? In this paper, we prove that in this case, no solution satisfies the same set of conditions. We also explain how to actually fairly distribute needed sacrifices in the case of a crisis.
File UTEP-CS-25-03 in pdf
Updated version UTEP-CS-25-03a in pdf
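Nash's criterion mentioned in the abstract -- select the alternative maximizing the product of utility gains over the disagreement point -- can be sketched as follows; the alternatives, utilities, and disagreement values are made-up illustrative numbers.

```python
# Nash bargaining solution (sketch): among feasible alternatives, pick the
# one maximizing the product of utility gains u_i - d_i, where d_i is each
# participant's utility if no agreement is reached. All numbers are
# hypothetical, for illustration only.

disagreement = [1.0, 2.0]           # utilities d_i without an agreement

alternatives = {                    # candidate outcomes and their utilities
    "A": [3.0, 3.0],
    "B": [2.0, 6.0],
    "C": [4.0, 2.5],
}

def nash_product(utilities):
    """Product of utility gains over the disagreement point."""
    prod = 1.0
    for u, d in zip(utilities, disagreement):
        prod *= max(u - d, 0.0)     # only gains over disagreement count
    return prod

best = max(alternatives, key=lambda a: nash_product(alternatives[a]))
```

Here alternative "B" wins with gain product (2-1)*(6-2) = 4, even though "C" gives the first participant the highest individual utility -- the product criterion balances the gains of all participants, which is exactly the fuzzy "everyone is happy" reading mentioned in the abstract.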
To appear in: Michal Baczynski, Bernard De Baets, Michal Holcapek, Vladik Kreinovich, and Jesus Medina Moreno (eds.), Proceedings of the 13th Conference of the European Society for Fuzzy Logic and Technology EUSFLAT 2025, Riga, Latvia, July 21-25, 2025.
When making decisions, it is important to take into account high-impact low-probability events. For such events, the traditional probability-based approach -- which considers the product of the probability p that this event happens and the probability P that a randomly selected building will be destroyed -- often underestimates risks. Available data has led to an empirical table that provides a more adequate risk estimate. Most of the entries in this table correspond to the fuzzy-like formula min(p,P). This paper explains this empirical result. Specifically, it explains both the effectiveness of the min formula -- and also explains deviations from this formula.
File UTEP-CS-25-02 in pdf
Updated version UTEP-CS-25-02b in pdf
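The contrast between the two estimates in the abstract can be seen on a toy example; the probability values below are assumed numbers, not data from the paper.

```python
# Toy comparison (made-up numbers) of the traditional product estimate p*P
# with the empirical fuzzy-like estimate min(p, P) for a high-impact
# low-probability event.

p = 0.001    # probability that the rare event happens (assumed value)
P = 0.5      # probability that a randomly selected building is destroyed

risk_product = p * P       # traditional estimate
risk_min = min(p, P)       # fuzzy-like estimate from the empirical table
```

For these values the product estimate is 0.0005, half the min estimate of 0.001 -- illustrating how, for rare events, the product formula yields systematically smaller (and, per the paper, often inadequate) risk values than the empirically supported min formula.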
To appear in Proceedings of the IEEE International Conference on Fuzzy Systems, Reims, France, July 6-9, 2025.
Large Language Models (LLMs) like ChatGPT have spectacular successes -- but they also have surprising failures that an average person with common sense could easily avoid. It is therefore desirable to incorporate the imprecise ("fuzzy") common sense into LLMs. A natural question is: to what extent will this help? This way, we may avoid a few simple mistakes, but will it significantly improve the LLMs' performance? What portion of the gap between current LLMs and ideal perfect AI-based agents can be, in principle, covered by using fuzzy techniques? The fact that few researchers working on LLMs (and on deep learning in general) try fuzzy methods suggests that most of these researchers do not believe that the use of fuzzy techniques will significantly improve LLMs' performance. Contrary to this pessimistic viewpoint, our analysis shows that potentially, fuzzy techniques can cover all of the above gap -- or at least a significant portion of this gap. In this sense, indeed, all LLMs need to become perfect is fuzzy techniques.
Original file UTEP-CS-25-01 in pdf
Updated version UTEP-CS-25-01b in pdf