In this article, we briefly describe what we learned from our teacher and friend Lotfi Zadeh -- and what others can learn from our experience.
File UTEP-CS-26-12 in pdf
Many physical phenomena are described by complex systems of partial differential equations. Even on high-performance parallel computers, numerical methods for solving these equations often require a large amount of computation time. Lately, AI techniques -- namely, neural networks (NNs) -- have been successfully used to speed up these computations: an NN is trained on several examples and, once trained, generates solutions for new initial and/or boundary conditions. These computations are faster, but they still require a lot of computational resources: time, memory, and energy. It is known that in AI computations, we can often use fewer resources by using limited precision when computing and processing the weights of an NN. To utilize this idea, we need to be able to find out how bounded precision in computing weights affects the AI computation results and -- ideally -- what the optimal allocation of weight precisions is. In this paper, we describe general algorithms for solving these two problems. As usual with algorithms, a lot of additional work is needed to make these algorithms practically efficient and easy to use -- but what we show is that all of this is doable.
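The effect of bounded weight precision can be illustrated with a toy sketch (the network, the fixed-point quantization model, and all parameter values below are ours, for illustration only, not from the paper):

```python
import numpy as np

def quantize(w, bits):
    """Round each weight to the nearest multiple of 2**(-bits)
    (a simple fixed-point model of limited-precision weights)."""
    step = 2.0 ** (-bits)
    return np.round(w / step) * step

def tiny_net(x, w1, w2):
    """A minimal two-layer network: ReLU hidden layer, linear output."""
    return np.maximum(0.0, x @ w1) @ w2

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 4))        # 100 sample inputs
w1 = rng.normal(size=(4, 8))         # hidden-layer weights
w2 = rng.normal(size=(8, 1))         # output-layer weights

exact = tiny_net(x, w1, w2)
errors = {}
for bits in (4, 8, 16):
    approx = tiny_net(x, quantize(w1, bits), quantize(w2, bits))
    errors[bits] = float(np.max(np.abs(exact - approx)))
    print(f"{bits:2d} bits: max output error = {errors[bits]:.2e}")
```

Running such an experiment over many precision assignments is, in effect, the brute-force version of the two problems the paper addresses: estimating the effect of bounded precision, and allocating precisions optimally.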
File UTEP-CS-26-11 in pdf
Complex computations often contain difficult-to-detect mistakes. This problem is very acute for AI-based computations -- where about 5% of the answers are wrong -- but it happens in more traditional computations as well. A natural way to detect such mistakes is to supplement the actual computation results with easy-to-understand explanations. This is what researchers are trying to do for AI to make its results more reliable, and this is what we propose to do for computations in general. In this paper, we illustrate this idea on the example of reliable engineering computing, where it is important not only to provide an estimate, but also to inform the user how accurate the provided estimate is.
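The reliable-computing idea -- returning an estimate together with its guaranteed accuracy -- can be sketched with a minimal interval-arithmetic class (a toy illustration; the class and the numbers are ours, not from the paper):

```python
from dataclasses import dataclass

@dataclass
class Interval:
    """An interval [lo, hi] guaranteed to contain the actual value."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))

    def midpoint_and_accuracy(self):
        return (self.lo + self.hi) / 2, (self.hi - self.lo) / 2

# measured length and width, each known to within +/- 0.05
length = Interval(9.95, 10.05)
width = Interval(4.95, 5.05)

area = length * width
mid, acc = area.midpoint_and_accuracy()
print(f"area = {mid:.3f} +/- {acc:.3f}")
```

The user thus sees not just the estimate but also how accurate it is -- and a result that falls outside expected bounds is an easy-to-understand signal that something went wrong.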
File UTEP-CS-26-10 in pdf
It was recently proven that any countable partially ordered set with no maximal elements can be divided into disjoint omega-chains, i.e., ordered subsets isomorphic to the set of natural numbers. In this paper, we prove that this division can be done algorithmically. We also discuss possible applications to the analysis of multiple personality disorder.
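A greedy dovetailing construction of such a division can be sketched on a sample poset: the natural numbers starting from 2, ordered by divisibility, which has no maximal elements (this concrete poset and the round-robin scheme are our illustration, not necessarily the paper's algorithm):

```python
def omega_chains(steps):
    """Greedily divide {2, 3, 4, ...} under divisibility into disjoint
    increasing chains, dovetailing so that every chain keeps growing
    (and thus, in the limit, becomes an omega-chain)."""
    chains = []
    assigned = set()
    next_seed = 2
    for _ in range(steps):
        # seed a new chain at the smallest not-yet-assigned number,
        # so that every element is eventually covered
        while next_seed in assigned:
            next_seed += 1
        chains.append([next_seed])
        assigned.add(next_seed)
        # round-robin: extend every chain by its smallest unassigned multiple
        for chain in chains:
            top = chain[-1]
            m = 2 * top
            while m in assigned:
                m += top
            chain.append(m)
            assigned.add(m)
    return chains

for chain in omega_chains(5):
    print(chain)
```

Each finite prefix produced this way consists of disjoint strictly increasing chains, and dovetailing guarantees that every chain is extended infinitely often.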
File UTEP-CS-26-09 in pdf
It is known that the famous Catalan architect Antoni Gaudi used arches in many of his buildings. A recent book has shown that practically all his arches have one of the following two shapes: they are either parabolic arches, in which the y-coordinate is a quadratic function of x, or so-called catenary arches, in which the y-coordinate is a hyperbolic cosine of x. In this paper, we provide a possible mathematical explanation of why Gaudi used only these two types of arches.
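The two shapes can be compared numerically: near the apex, a catenary y = h + a - a*cosh(x/a) agrees with a parabola of matching curvature (since cosh(t) is approximately 1 + t^2/2 for small t), but the shapes diverge away from the apex (a toy sketch; the parameter values are ours):

```python
import math

def parabola(x, a=0.5, h=2.0):
    """Parabolic arch with apex height h: y = h - a*x**2."""
    return h - a * x ** 2

def catenary(x, a=1.0, h=2.0):
    """Catenary arch with the same apex height: y = h + a - a*cosh(x/a)."""
    return h + a - a * math.cosh(x / a)

for x in (0.0, 0.5, 1.0, 1.5):
    print(f"x = {x:.1f}: parabola = {parabola(x):.4f}, catenary = {catenary(x):.4f}")
```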
File UTEP-CS-26-08 in pdf
A recent paper has shown that dolphins living in small social groups age more slowly than solitary dolphins, but dolphins living in larger social groups age faster than solitary dolphins. That paper provided explanations based, to some extent, on the specifics of social life, with its mutual help and -- at the same time -- stressful conflicts. The current paper intends to provide a more general explanation of the newly observed phenomena, an explanation based on the ideas of general decision theory.
File UTEP-CS-26-07 in pdf
While convolutional neural networks (CNNs) are very effective in image processing, they are not robust: a minor change in a few pixels can drastically change the image processing result -- and thus lead to a misclassification of the image. A recent paper has shown that CNNs can be made more robust if, instead of the usual max-neurons that return the largest of the inputs, we use neurons that return the second largest of the inputs. Such neurons are known as drop-max neurons. In this paper, we prove that a natural robustness requirement uniquely determines the use of drop-max neurons. We also describe what type of neurons we should use if we want to achieve stronger robustness.
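The robustness gain of a drop-max neuron is easy to illustrate: a single corrupted input can shift the max-neuron output arbitrarily far, while it shifts the drop-max output only up to the former maximum (toy input values, ours):

```python
def max_neuron(inputs):
    """The usual max-neuron: returns the largest input."""
    return max(inputs)

def drop_max_neuron(inputs):
    """A drop-max neuron: returns the second largest input."""
    return sorted(inputs)[-2]

clean = [0.2, 0.9, 0.5, 0.4]
corrupted = [0.2, 0.9, 0.5, 3.0]   # one input spiked by a small pixel change

print(max_neuron(clean), max_neuron(corrupted))            # 0.9 vs 3.0
print(drop_max_neuron(clean), drop_max_neuron(corrupted))  # 0.5 vs 0.9
```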
File UTEP-CS-26-06 in pdf
One of the main directions in modern pedagogy is constructivism, in which, instead of explicitly teaching general rules and algorithms, the instructor provides the students with a well-designed sequence of examples, based on which the students can easily reconstruct the general rules. This direction has been very successful -- and its success seems to be confirmed by the spectacular successes of modern AI, successes based on a similar idea: that teaching a computer by examples from which it can reconstruct the rules is much more productive than explicitly teaching it the rules. However, our experience of teaching complex rules and algorithms shows that sometimes, teaching the rules first leads to better results. In this paper, we show that several recent machine learning results show a similar tendency -- that for complex rules and algorithms, it is sometimes beneficial to explicitly teach the computer the rules.
File UTEP-CS-26-05 in pdf
In many practical situations -- in particular, in many medical problems -- it is important to find the coefficients of linear regression based on the empirical data. In many such situations, we only know the upper bound on the absolute value of the measurement error -- i.e., in effect, we only know intervals containing the actual values. When we know that the dependence is exactly linear, finding the exact ranges of possible values of the regression coefficients is NP-hard -- meaning that, in general (unless P = NP), the exact computation of these ranges is not practically feasible. However, in many practical cases -- in particular, in many medical applications -- linear regression is only an approximate model, obtained by ignoring quadratic and higher order terms. In such cases, it is reasonable to also ignore quadratic order terms in our estimation of the ranges of regression coefficients. We show that this natural idea enables us to design efficient algorithms for estimating these ranges. Specifically, we present a polynomial-time algorithm.
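The first-order idea can be sketched for the special case when only the y-measurements carry interval uncertainty; in this special case the coefficients depend linearly on y, so the first-order bounds happen to be exact (the data, names, and setup below are ours, for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, k - 1))])
c_true = np.array([1.0, 2.0, -0.5])                    # hypothetical coefficients
y_mid = X @ c_true + rng.normal(scale=0.01, size=n)    # midpoint measurements
Delta = np.full(n, 0.1)                                # error bounds on y

# Least squares gives c(y) = M @ y with M = pinv(X); since this dependence
# is linear in y, a change |dy_i| <= Delta_i shifts c_j by at most
# sum_i |M[j, i]| * Delta_i -- the first-order half-width of c_j's range.
M = np.linalg.pinv(X)
c_mid = M @ y_mid
c_rad = np.abs(M) @ Delta

for j in range(k):
    print(f"c[{j}] in [{c_mid[j] - c_rad[j]:.3f}, {c_mid[j] + c_rad[j]:.3f}]")
```

When the x-values are also only known with interval uncertainty, the dependence of the coefficients on the data is no longer linear -- this is where ignoring the quadratic terms, as the paper proposes, comes into play.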
File UTEP-CS-26-04 in pdf
To appear in Proceedings of the 2026 Annual Conference of North American Fuzzy Information Processing Society NAFIPS 2026, El Paso, Texas, March 14-16, 2026.
The reason why we have seasons is that the Earth's rotation axis is tilted. An interesting fact is that the sine of the tilt is almost exactly 2/5. This fact leads to a natural question: is this an indication of a physical resonance -- or is this a random coincidence? In this paper, we show that this is an accidental coincidence.
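The near-coincidence itself is a one-line computation (23.44 degrees is the standard current value of the Earth's axial tilt):

```python
import math

tilt_deg = 23.44   # Earth's current axial tilt, in degrees
s = math.sin(math.radians(tilt_deg))
print(f"sin(tilt) = {s:.5f}, 2/5 = {0.4:.5f}, difference = {abs(s - 0.4):.5f}")
```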
File UTEP-CS-26-03 in pdf
To appear in Proceedings of the 2026 Annual Conference of North American Fuzzy Information Processing Society NAFIPS 2026, El Paso, Texas, March 14-16, 2026.
In a recent book, two veteran Air Force leaders provide general advice on how to deal with real-life challenges. In this paper, we summarize this advice in precise terms, and explain that this advice fits with common sense.
File UTEP-CS-26-02 in pdf
To appear in Proceedings of the 2026 Annual Conference of North American Fuzzy Information Processing Society NAFIPS 2026, El Paso, Texas, March 14-16, 2026.
Many Gulag memoirs mention that to avoid starvation, smart team leaders "cheated" -- fictitiously redistributed the overall production among team members, as a result of which the overall award increased. This practice leads to a natural question: which award system prevents such cheating? In which award system will such a fictitious redistribution not change the overall team award? In this paper, we show that the only award system that prevents such cheating is the linear one, in which the award is a linear function of productivity.
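The invariance of a linear award system -- and the non-invariance of a nonlinear one -- can be checked directly (toy numbers and sample award functions, ours):

```python
def team_award(productions, f):
    """Total team award: the award function f applied to each member."""
    return sum(f(p) for p in productions)

def linear(p):
    return 3.0 * p + 1.0      # award = a*p + b, with sample coefficients

def convex(p):
    return p ** 2             # a nonlinear award function, for contrast

honest = [4.0, 6.0, 10.0]     # true productions, total 20
cheated = [0.0, 0.0, 20.0]    # fictitious redistribution, same total

print(team_award(honest, linear), team_award(cheated, linear))  # equal
print(team_award(honest, convex), team_award(cheated, convex))  # differ
```

For the linear award, the team total is a*(sum of productions) + b*(number of members), so any redistribution that preserves the overall production leaves it unchanged; for a convex award, shifting all production onto one member increases the total.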
File UTEP-CS-26-01 in pdf