Computer Science Department

Abstracts of 2018 Reports

Technical Report UTEP-CS-18-91, December 2018

Updated version UTEP-CS-18-91a, January 2019

How Accurately Can We Determine the Coefficients: Case of Interval Uncertainty

Michal Cerny and Vladik Kreinovich

Published in *Proceedings of the 12th International Workshop
on Constraint Programming and Decision Making CoProd'2019,
Part of the World Congress of the International Fuzzy
Systems Association and the Annual Conference of the North
American Fuzzy Information Processing Society IFSA/NAFIPS'2019*,
Lafayette, Louisiana, June 17, 2019, pp. 779-787.

In many practical situations, we need to estimate the parameters of a linear (or more general) dependence based on measurement results. To do that, it is useful, before we start the actual measurements, to estimate how accurately we can, in principle, determine the desired coefficients: if the resulting accuracy is not sufficient, then we should not waste time and resources trying and should instead invest in more accurate measuring instruments. This is the problem that we analyze in this paper.

Original file UTEP-CS-18-91 in pdf

Updated version UTEP-CS-18-91a in pdf

Technical Report UTEP-CS-18-90, December 2018

Can We Improve the Standard Algorithm of Interval Computation by Taking Almost Monotonicity into Account?

Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 12th International Workshop
on Constraint Programming and Decision Making CoProd'2019,
Part of the World Congress of the International Fuzzy
Systems Association and the Annual Conference of the North
American Fuzzy Information Processing Society IFSA/NAFIPS'2019*,
Lafayette, Louisiana, June 17, 2019, pp. 767-778.

In many practical situations, it is necessary to perform interval computations -- i.e., to find the range of a given function y = f(x1,...,xn) on given intervals -- e.g., when we want to find guaranteed bounds of a quantity that is computed based on measurements, and for these measurements, we only have upper bounds of the measurement error. The standard algorithm for interval computations first checks for monotonicity. However, when the function f is almost monotonic, this algorithm does not utilize this fact. In this paper, we show that such closeness-to-monotonicity can be efficiently utilized.
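To make the monotonicity-based step concrete, here is a small sketch (our own illustration, with function and argument names of our choosing, not code from the paper): if f is known to be increasing or decreasing in each variable, its range on a box is attained at two corners determined by the monotonicity signs.

```python
def range_of_monotone(f, boxes, signs):
    # For a function monotonic in each variable, the range on a box
    # [x1-, x1+] x ... x [xn-, xn+] is attained at the two corners
    # determined by the signs: +1 = increasing, -1 = decreasing.
    lo = f(*[(a if s > 0 else b) for (a, b), s in zip(boxes, signs)])
    hi = f(*[(b if s > 0 else a) for (a, b), s in zip(boxes, signs)])
    return lo, hi

# Example: f(x1, x2) = x1 - x2 is increasing in x1 and decreasing in x2,
# so on [1, 2] x [0, 3] its range is [1 - 3, 2 - 0] = [-2, 2].
print(range_of_monotone(lambda x, y: x - y, [(1, 2), (0, 3)], [+1, -1]))
```

When f is only *almost* monotonic, no such corner shortcut applies directly, which is the situation the paper addresses.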

Technical Report UTEP-CS-18-89, December 2018

Fuzzy Approach to Optimal Placement of Health Centers

Juan Carlos Figueroa Garcia, Carlos Franco, and Vladik Kreinovich

Published in *Proceedings of the 12th International Workshop
on Constraint Programming and Decision Making CoProd'2019,
Part of the World Congress of the International Fuzzy
Systems Association and the Annual Conference of the North
American Fuzzy Information Processing Society IFSA/NAFIPS'2019*,
Lafayette, Louisiana, June 17, 2019, pp. 793-799.

In countries with socialized medicine, it is important to decide how to distribute limited medical resources -- in particular, where to place health centers. In this paper, we formulate and solve the corresponding constraint optimization problem. Once the locations are selected, it is necessary to decide which regions are served by each center. Traditionally, this decision is crisp, in the sense that each location is assigned to a single health center. We show that the medical service can be made more efficient if we allow fuzzy assignments, when some locations can be potentially served by two neighboring health centers.

Technical Report UTEP-CS-18-88, December 2018

Updated version UTEP-CS-18-88a, April 2019

Between Dog and Wolf: A Continuous Transition from Fuzzy to Probabilistic Estimates

Martine Ceberio, Olga Kosheleva, Vladik Kreinovich, and Luc Longpre

Published in *Proceedings of the International IEEE Conference on
Fuzzy Systems FUZZ-IEEE'2019*, New Orleans, Louisiana, June
23-26, 2019, pp. 906-910.

Often, we use original expert estimates to compute estimates of related quantities. In many practical situations, it is desirable to know how accurate the resulting estimate is. There are many techniques for computing this accuracy: we can use simple probabilistic ideas, and we can use simple fuzzy ideas. Strangely enough, these two reasonable techniques lead to drastically different results. Which of them is correct? Our practical tests show that neither of the two methods is perfect: the probabilistic approach usually underestimates uncertainty, while the fuzzy approach overestimates it. This looks similar to many cases that motivated Zadeh to promote the idea of soft computing -- a combination of different uncertainty techniques. To get a more adequate combination technique, we analyzed the general problem of combining accuracy estimates and came up with a 1-parametric family of techniques that contains the probabilistic and fuzzy techniques as particular cases -- and that indeed works better on several practical examples than each of the original two techniques.

Original file UTEP-CS-18-88 in pdf

Updated version UTEP-CS-18-88a in pdf

Technical Report UTEP-CS-18-87, December 2018

Updated version UTEP-CS-18-87a, April 2019

In Its Usual Formulation, Fuzzy Computation Is, In General, NP-Hard, But a More Realistic Formulation Can Make It Feasible

Martine Ceberio, Olga Kosheleva, Vladik Kreinovich, and Luc Longpre

Published in *Proceedings of the International IEEE Conference on
Fuzzy Systems FUZZ-IEEE'2019*, New Orleans, Louisiana, June
23-26, 2019, pp. 412-417.

The need for most computations comes from the fact that in many practical situations, we cannot directly measure or estimate the desired quantity y -- e.g., we cannot directly measure the distance to a star or the next week's temperature. To provide the desired estimate, we measure or estimate easier-to-measure quantities x1, ..., xn which are related to y, and then use the known relation y = f(x1,...,xn) to transform our estimates Xi for xi into an estimate Y=f(X1,...,Xn) for y. In situations when xi are known with fuzzy uncertainty, we thus need fuzzy computation. Zadeh's extension principle provides us with formulas for fuzzy computation. The challenge is that the resulting computational problem is NP-hard -- which means that, unless P=NP (which most computer scientists consider to be impossible), it is not possible to solve all fuzzy computation problems in feasible time. To overcome this challenge, we propose a more realistic formalization of fuzzy computation -- in which, instead of the unrealistic requirement that the corresponding properties hold for all xi, we only require that they hold for almost all xi -- in some reasonable sense. We show that under this modification, the problem of fuzzy computation becomes computationally feasible.
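For readers unfamiliar with Zadeh's extension principle, here is a small sketch for discretized membership functions (our own illustration; all names are ours). The brute-force enumeration over all tuples also hints at why the general problem is computationally hard.

```python
from itertools import product

def extend(f, memberships):
    # Zadeh's extension principle on discrete membership functions:
    #   mu_Y(y) = sup { min_i mu_i(x_i) : f(x1, ..., xn) = y }
    # Brute-force enumeration over all tuples -- feasible only for
    # small grids, which reflects the hardness of the general problem.
    mu_y = {}
    for combo in product(*[mu.items() for mu in memberships]):
        xs = [x for x, _ in combo]
        degree = min(m for _, m in combo)
        y = f(*xs)
        mu_y[y] = max(mu_y.get(y, 0.0), degree)
    return mu_y

# Example: y = x1 + x2 with two small discrete fuzzy sets
mu_a = {1: 0.5, 2: 1.0, 3: 0.5}
mu_b = {0: 1.0, 1: 0.5}
print(extend(lambda a, b: a + b, [mu_a, mu_b]))
# -> {1: 0.5, 2: 1.0, 3: 0.5, 4: 0.5}
```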

Original file UTEP-CS-18-87 in pdf

Updated version UTEP-CS-18-87a in pdf

Technical Report UTEP-CS-18-86, November 2018

Computing with Words -- When Results Do Not Depend on the Selection of the Membership Function

Christopher W. Tovar, Carlos Cervantes, Mario Delgado, Stephanie Figueroa, Caleb Gillis, Daniel Gomez, Andres Llausas, Julio C. Lopez Molinar, Mariana Rodriguez, Alexander Wieczkowski, Francisco Zapata, and Vladik Kreinovich

Published in *Journal of Uncertain Systems*, 2019, Vol. 13,
No. 2, pp. 133-137.

Often, we need to transform natural-language expert knowledge into computer-understandable numerical form. One of the most successful ways to do it is to use fuzzy logic and membership functions. The problem is that membership functions are subjective. It is therefore desirable to look for cases when the results do not depend on this subjective choice. In this paper, after describing a known example of such a situation, we list several other examples where the results do not depend on the subjective choice of a membership function.

Technical Report UTEP-CS-18-85, November 2018

Secure Multi-Agent Quantum Communication: Towards the Most Efficient Scheme (A Pedagogical Remark)

Olga Kosheleva and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2019,
Vol. 49, pp. 119-125.

In many economic and financial applications, it is important to have secure communications. At present, communication security is provided mostly by RSA coding, but the emergent quantum computing can break this encoding, thus making it not secure. One way to make communications absolutely secure is to use quantum encryption. The existing schemes for quantum encryption are aimed at agent-to-agent communications; however, in practice, we often need secure multi-agent communications, where each of the agents has the ability to securely send messages to everyone else. In principle, we can repeat the agent-to-agent scheme for each pair of agents, but this requires a large number of complex preliminary quantum communications. In this paper, we show how to minimize the number of such preliminary communications -- without sacrificing reliability of the all-pairs scheme.

Technical Report UTEP-CS-18-84, November 2018

Detecting At-Risk Students: Empirical Results and Their Theoretical Explanation

Edgar Daniel Rodriguez Velasquez, Olga Kosheleva, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2019,
Vol. 49, pp. 73-79.

In teaching, it is very important to identify, as early as possible, students who may be at risk of failure. Traditionally, two natural criteria are used for this identification: poor grades in previous classes, and poor grades on the first assignments in the current class. Our empirical results show that these criteria do not always work: sometimes a student deemed at-risk by one of these criteria consistently succeeds, and sometimes a student who is not considered at-risk frequently fails. In this paper, we provide a theoretical explanation of our quantitative empirical results, and we use these results to provide recommendations on how to better detect at-risk students.

Technical Report UTEP-CS-18-83, November 2018

Why Early Galaxies Were Pickle-Shaped: A Geometric Explanation

Olga Kosheleva and Vladik Kreinovich

The vast majority of currently observed geometric shapes of
celestial bodies can be explained by a simple symmetry idea: the
initial distribution of matter is invariant with respect to
shifts, rotations, and scaling, but this distribution is unstable,
so we have spontaneous symmetry breaking. According to statistical
physics, among all possible transitions, the most probable are the
ones that retain the largest number of symmetries. This explains
the currently observed shapes and -- on the qualitative level --
their relative frequency. According to this idea, the most
probable first transition is into a planar (*pancake*) shape,
then into a logarithmic spiral, and other shapes like a straight
line fragment (*pickle*) are less probable. This is exactly
what we have observed until recently, but recent observations have
shown that, in contrast to the currently observed galaxies, early
galaxies are mostly pickle-shaped. In this paper, we provide a
possible geometric explanation for this phenomenon: namely,
according to modern physics, the proper space was originally more
than 3-dimensional; later, the additional dimensions compactified
and thus, became not directly observable. For galaxies formed at
the time when the overall spatial dimension was 5 or larger, the
pickle shape is indeed more symmetric than the planar shape -- and
should, therefore, prevail -- which is exactly what we observe.

Technical Report UTEP-CS-18-82, November 2018

Revised version UTEP-CS-18-82b, December 2019

Revised version UTEP-CS-18-82c, February 2020

Relativistic Effects Can Be Used to Achieve a Universal Square-Root (Or Even Faster) Computation Speedup

Olga Kosheleva and Vladik Kreinovich

To appear in: Andreas Blass, Patrick Cegielsky, Nachum
Dershowitz, Manfred Droste, and Bernd Finkbeiner (eds.), *Fields of
Logic and Computation III*, Springer.

In this paper, we show that special relativity phenomena can be used to reduce the computation time of any algorithm from T to the square root of T. For this purpose, we keep the computers where they are, but the whole civilization starts moving around the computer -- at an increasing speed, reaching speeds close to the speed of light. A similar square-root speedup can be achieved if we place ourselves near a growing black hole. Combining the two schemes can lead to an even faster speedup: from time T to the 4th-order root of T.
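One back-of-the-envelope way to see where a square-root speedup can come from (an illustrative computation with our own choice of the Lorentz-factor profile, not necessarily the paper's exact scheme): a moving clock runs slower, so if the civilization moves so that its Lorentz factor at the computer's time t is gamma(t) = sqrt(1+t), then the proper time it experiences while the computer runs for time T is

```latex
\tau \;=\; \int_0^T \frac{dt}{\gamma(t)}
     \;=\; \int_0^T \frac{dt}{\sqrt{1+t}}
     \;=\; 2\left(\sqrt{1+T}\,-\,1\right) \;\approx\; 2\sqrt{T},
```

i.e., a computation taking time T on the stationary computer is experienced by the moving observers as only about 2 sqrt(T).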

Original file UTEP-CS-18-82 in pdf

Revised version UTEP-CS-18-82b in pdf

Revised version UTEP-CS-18-82c in pdf

Technical Report UTEP-CS-18-81, November 2018

Translating Discrete Estimates into a Less Detailed Scale: An Optimal Approach

Thongchai Dumrongpokaphan, Olga Kosheleva, and Vladik Kreinovich

Published in *Thai Journal of Mathematics*, 2019, Special
issue Structural Change Modeling and Optimization in Econometrics
2018, pp. 41-55.

In many practical situations, we use estimates that experts make on a 0-to-n scale. For example, to estimate the quality of a lecturer, we ask each student to evaluate this quality by selecting an integer from 0 to n. Each such estimate may be subjective; so, to increase the estimates' reliability, it is desirable to combine several estimates of the corresponding quality. Sometimes, different estimators use slightly different scales: e.g., one estimator uses a scale from 0 to n+1, and another estimator uses a scale from 0 to n. In such situations, it is desirable to translate these estimates to the same scale, i.e., to translate the first estimator's estimates into the 0-to-n scale. There are many possible translations of this type. In this paper, we find a translation which is optimal under a reasonable optimality criterion.
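As a baseline for comparison (our own illustrative rule, not necessarily the optimal translation derived in the paper), the simplest translation between such scales is proportional rescaling with rounding:

```python
def rescale(k, n_from, n_to):
    # translate estimate k on a 0-to-n_from scale to a 0-to-n_to
    # scale by proportional rescaling, rounded to the nearest integer
    # (an illustrative baseline only)
    return round(k * n_to / n_from)

# translating a 0-to-6 scale into a 0-to-5 scale
print([rescale(k, 6, 5) for k in range(7)])
```

Note that such a rule involves arbitrary tie-breaking for half-integer values, which is one reason a principled optimality criterion is needed.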

Technical Report UTEP-CS-18-80, November 2018

Bhutan Landscape Anomaly: Possible Effect on Himalayan Economy (In View of Optimal Description of Elevation Profiles)

Thach Ngoc Nguyen, Laxman Bokati, Aaron Velasco, and Vladik Kreinovich

Published in *Thai Journal of Mathematics*, 2019, Special
issue Structural Change Modeling and Optimization in Econometrics
2018, pp. 57-69.

Economies of countries located in seismic zones are strongly affected by this seismicity. If we underestimate the seismic activity, then a reasonably routine earthquake can severely damage the existing structures and thus lead to huge economic losses. On the other hand, if we overestimate the seismic activity, we waste a lot of resources on unnecessarily fortifying all the buildings -- and this too harms the economies. From this viewpoint, it is desirable to have estimates of regional seismic activity which are as accurate as possible. Current predictions are mostly based on the standard geophysical understanding of earthquakes as being largely caused by the movement of tectonic plates and terranes. This understanding works in most areas, but in the Bhutan area of the Himalayan region, there seems to be a landscape anomaly. As a result, for this region, we have less confidence in the accuracy of seismic predictions based on the standard understanding and thus have to use higher seismic thresholds in construction. In this paper, we find the optimal description of landscape-describing elevation profiles, and we use this description to show that the seeming anomaly is actually in perfect agreement with the standard understanding of seismic activity. Our conclusion is that it is safe to apply, in this region, estimates based on the standard understanding and thus avoid unnecessary expenses caused by an increased threshold.

Technical Report UTEP-CS-18-79, November 2018

Towards Optimal Implementation of Decentralized Currencies: How to Best Select Probabilities in an Ethereum-Type Proof-of-Stake Protocol

Thach Ngoc Nguyen, Christian Servin, and Vladik Kreinovich

Published in *Thai Journal of Mathematics*, 2019, Special
issue Structural Change Modeling and Optimization in Econometrics
2018, pp. 71-76.

Nowadays, most financial transactions are based on a centralized system, where all the transaction records are stored in a central location. This centralization makes the financial system vulnerable to cyber-attacks. A natural way to make the financial system more robust and less vulnerable is to switch to decentralized currencies. Such a transition will also make the financial system more transparent. The historically first currency of this type -- bitcoin -- uses a large amount of electric energy to mine new coins and is, thus, not scalable to the level of the financial system as a whole. A more realistic and less energy-consuming scheme is provided by proof-of-stake currencies, where the right to mint a new coin is assigned to a randomly selected user, with probability depending on the user's stake (e.g., his/her number of coins). Which probabilities should we choose? In this paper, we find the probability selection that provides the optimal result -- optimal in the sense that it is the least conducive to cheating.

Technical Report UTEP-CS-18-78, November 2018

Symmetries Are Important

Vladik Kreinovich

Published in: Olga Kosheleva, Sergey Shary, Gang Xiang, and Roman
Zapatrin (eds.), *Beyond Traditional Probabilistic Data
Processing Techniques: Interval, Fuzzy, etc. Methods and Their
Applications*, Springer, Cham, Switzerland, 2020, pp. 1-5.

This short article explains why symmetries are important, and how they influenced many research projects in which I participated.

Technical Report UTEP-CS-18-77, November 2018

Should School Feel Like a Family: Lessons from Business Controversy as Interpreted by Decision Making Theory

Olga Kosheleva, Julian Viera, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2019,
Vol. 50, pp. 112-116.

Traditional business theory promoted the ideal of business as a family: everyone should feel good about each other, and all employees should feel good working together towards a joint goal. Recently, however, researchers have claimed that the well-promoted ideal is unattainable: it is a ruse causing everyone to overwork. Instead, these researchers propose a non-emotional collaboration of adults working temporarily on a joint project. In this paper, we show that this new trend is not just based on anecdotal evidence; it actually has a solid foundation in decision theory. So maybe we should apply this new trend to teaching too -- and place less emphasis on the need for everyone to become friends and for team-building? Maybe -- as the new business trend suggests -- we should reserve our feelings for our real families?

Technical Report UTEP-CS-18-76, November 2018

From Gig Economy to Gig Education

Olga Kosheleva, Julian Viera, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2019,
Vol. 50, pp. 107-111.

The modern economy has benefited from the gig economy idea: instead of hiring permanent employees, a company assigns each task to the person who is the most efficient in performing this task. This way, each task is performed in the best possible way -- by the person most suited for this job. Why not extend this idea to education? Every student deserves the best possible teacher in every topic. So why not have the teacher who is best in town at explaining quadratic equations teach quadratic equations to all the students from the town? In this paper, we describe this proposal and its logistics in some detail.

Technical Report UTEP-CS-18-75, October 2018

Psychological Behavior of English Learners Utilizing a Cognitive Tutor in an Online Pre-Calculus

Julian Viera Jr., Olga Kosheleva, and Vladik Kreinovich

The educational landscape is becoming a digital learning environment. Students in today's digital world draw from multiple sources of information: from hypertext, videos, and social media to video games and internet searches. English Learners -- individuals learning two languages at once -- have a passive relationship with the computer when the software they use is not in their native language: they feel that this educational software belongs to another culture. This paper presents findings from a study of English Learners' engagement in a fully online pre-calculus course. The authors utilized Cultural-Historical Activity Theory to describe how English Learners created authentic bilingual learning environments and improved their self-efficacy for mathematics.

Technical Report UTEP-CS-18-74, October 2018

Updated version UTEP-CS-18-74a, December 2018

Probability-Based Approach Explains (and Even Improves) Heuristic Formulas of Defuzzification

Christian Servin, Olga Kosheleva, and Vladik Kreinovich

Published in: Hirosato Seki, Canh Hao Nguyen, Van-Nam Huynh, and
Masahiro Inuiguchi (eds.), *Integrated Uncertainty in Knowledge
Modelling and Decision Making: 7th International Symposium
IUKM'2019, Nara, Japan, March 27-29, 2019*,
Springer Lecture Notes in Artificial Intelligence, 2019, Vol. 11471,
pp. 98-108.

Fuzzy techniques have been successfully used in many applications. However, formulas for processing fuzzy information are often heuristic: they lack a convincing justification, and thus, users are sometimes reluctant to use them. In this paper, we show that we can justify (and sometimes even improve) these methods if we use a probability-based approach.

Original file UTEP-CS-18-74 in pdf

Updated version UTEP-CS-18-74a in pdf

Technical Report UTEP-CS-18-73, October 2018

How to Describe Correlation in the Interval Case?

Carlos Jimenez, Francisco Zapata, and Vladik Kreinovich

Published in *Journal of Uncertain Systems*, 2019, Vol. 13,
No. 2, pp. 109-113.

In many areas of science and engineering, we want to change a difficult-to-directly-change quantity -- e.g., the economy's growth rate. Since we cannot directly change the desired quantity, we need to find easier-to-change auxiliary quantities that are correlated with the desired quantity -- in the sense that a change in the auxiliary quantity will cause a change in the desired quantity as well. How can we describe this intuitive notion of correlation in precise terms? The traditional notion of correlation comes from situations in which there are many independent factors causing the predictive model to differ from the actual values and all these factors are of about the same size. In this case, the distribution of the difference between the model's predictions and the actual values is close to normal. In many practical situations, however, there are a few major factors which are much larger than others. In this case, the distribution of the differences is not necessarily normal. In this paper, we show how, in such situations, we can formalize the intuitive notion of correlation.

Technical Report UTEP-CS-18-72, October 2018

Experimental Determination of Mechanical Properties Is, In General, NP-Hard -- Unless We Measure Everything

Yan Wang, Oscar Galindo, Michael Baca, Jake Lasley, and Vladik Kreinovich

Published in *Journal of Uncertain Systems*, 2019, Vol. 13,
No. 2, pp. 147-150.

When forces are applied to different parts of a construction, they
cause displacements. In practice, displacements are usually
reasonably small. In this case, we can safely ignore quadratic and
higher order terms in the corresponding dependence and assume that
the forces depend linearly on the displacements. The coefficients of
this linear dependence determine the mechanical properties of the
construction and thus, need to be experimentally determined. In
the ideal case, when we measure the forces and displacements at
all possible locations, it is easy to find the corresponding
coefficients: it is sufficient to solve the corresponding system
of linear equations. In practice, however, we only measure
displacements and forces at *some* locations. We show that in
this case, the problem of determining the corresponding
coefficients becomes, in general, NP-hard.
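In the everything-is-measured case described above, the recovery is indeed straightforward; here is a minimal sketch (our own illustration, with a hypothetical 2x2 linear structure): applying a unit displacement in each direction reads off one column of the coefficient matrix.

```python
def recover_matrix(apply_model, n):
    # In the ideal case when everything can be measured, the matrix K
    # of the linear law F = K u is recovered column by column: applying
    # the unit displacement e_j yields exactly column j of K.
    cols = []
    for j in range(n):
        e = [1.0 if i == j else 0.0 for i in range(n)]
        cols.append(apply_model(e))
    # transpose columns into rows, so K[i][j] is row i, column j
    return [list(row) for row in zip(*cols)]

# hypothetical 2x2 "true" structure, used here only to demo the recovery
true_K = [[2.0, -1.0], [-1.0, 2.0]]
model = lambda u: [sum(true_K[i][j] * u[j] for j in range(2)) for i in range(2)]
print(recover_matrix(model, 2))  # -> [[2.0, -1.0], [-1.0, 2.0]]
```

With only partial measurements, each experiment constrains K only through a few of its entries, and the paper shows that recovering the rest is, in general, NP-hard.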

Technical Report UTEP-CS-18-71, October 2018

All Maximally Complex Problems Allow Simplifying Divide-and-Conquer Approach: Intuitive Explanation of a Somewhat Counterintuitive Ladner's Result

Olga Kosheleva and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 48, pp. 53-60.

Ladner's 1975 result says that any NP-complete problem -- i.e., in effect, any maximally complex problem -- can be reduced to solving two easier problems. This result sounds counter-intuitive: if a problem is maximally complex, how can it be reduced to simpler ones? In this paper, we provide an intuitive explanation for this result. Our main argument is that since complexity and easiness-to-divide are not perfectly correlated, it is natural to expect that a maximally complex problem is not maximally difficult to divide. Our related argument is that -- as this result shows -- NP-completeness is a sufficient but not a necessary condition for a problem to be maximally complex; how to come up with a more adequate notion of complexity is still an open problem.

Technical Report UTEP-CS-18-70, October 2018

In the Discrete Case, Averaging Cannot Be Consistent

Olga Kosheleva and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 48, pp. 46-52.

When we have two estimates of the same quantity, it is desirable
to combine them into a single more accurate estimate. In the usual
case of continuous quantities, a natural idea is to take the
arithmetic average of the two estimates. If we have four
estimates, then we can divide them into two pairs, average each
pair, and then average the resulting averages. Arithmetic average
is *consistent* in the sense that the result does not depend
on how we divide the original four estimates into two pairs. For
discrete quantities -- e.g., quantities described by integers --
the arithmetic average of two integers is not always an integer.
In this case, we need to select one of the two integers closest to
the average. In this paper, we show that no matter how we select
-- even if we allow probabilistic selection -- the resulting
averaging cannot be always consistent.
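A concrete instance of this inconsistency (our own example, using one particular deterministic rounding rule; the paper proves the result for every selection rule, including probabilistic ones):

```python
def avg(a, b):
    # integer averaging: when the mean is not an integer, select the
    # closest integer below it (one particular deterministic choice)
    return (a + b) // 2

# four integer estimates of the same quantity: 0, 1, 1, 2
# pairing 1: (0, 1) and (1, 2)
r1 = avg(avg(0, 1), avg(1, 2))  # avg(0, 1) = 0, avg(1, 2) = 1, avg(0, 1) = 0
# pairing 2: (0, 2) and (1, 1)
r2 = avg(avg(0, 2), avg(1, 1))  # avg(0, 2) = 1, avg(1, 1) = 1, avg(1, 1) = 1
print(r1, r2)  # 0 1 -- the result depends on how the estimates are paired
```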

Technical Report UTEP-CS-18-69, October 2018

Preferences (Partial Pre-Orders) on Complex Numbers -- in View of Possible Use in Quantum Econometrics

Songsak Sriboonchitta, Vladik Kreinovich, and Olga Kosheleva

Published in *Thai Journal of Mathematics*, 2019, Special
issue Structural Change Modeling and Optimization in Econometrics
2018, pp. 33-39.

In economic applications, it is desirable to find an optimal solution -- i.e., a solution which is preferable to any other possible solution. Traditionally, the state of an economic system has been described by real-valued quantities such as profit, unemployment level, etc. For such quantities, preferences correspond to the natural order between real numbers: all other things being equal, the more profit the better, and the smaller the unemployment, the better. Lately, it has turned out that, to adequately describe economic phenomena, it is often convenient to use complex numbers. From this viewpoint, a natural question is: what are the possible orders on complex numbers? In this paper, we show that the only possible orders are orders on real numbers.

Technical Report UTEP-CS-18-68, October 2018

Comparing US and Russian Grading Scales

Olga Kosheleva and Vladik Kreinovich

Published in *Journal of Innovative Technology and Education*,
2018, Vol. 5, No. 1, pp. 15-20.

In the US, grades are usually based on comprehensive written exams: the larger the proportion of topics in which the student shows knowledge, the higher the student's grade. In contrast, in Russia, grades are based on oral exams, and the bulk of the grade comes from a student answering questions on a few (usually three) randomly selected topics. A natural question is: what is the relation between the two grading scales? It turns out that the "excellent" and "good" grades mean the same in both scales, while the US "satisfactory" level is higher than the similar Russian level.

Technical Report UTEP-CS-18-67, September 2018

Need to Combine Interval and Probabilistic Uncertainty: What Needs to Be Computed, What Can Be Computed, What Can Be Feasibly Computed, and How Physics Can Help

Songsak Sriboonchitta, Thach Ngoc Nguyen, Vladik Kreinovich, and Hung T. Nguyen

In many practical situations, the quantity of interest is difficult to measure directly. In such situations, to estimate this quantity, we measure easier-to-measure quantities which are related to the desired one by a known relation, and we use the results of these measurements to estimate the desired quantity. How accurate is this estimate?

The traditional engineering approach assumes that we know the probability distributions of measurement errors; however, in practice, we often only have partial information about these distributions. In some cases, we only know the upper bounds on the measurement errors; in such cases, the only thing we know about the actual value of each measured quantity is that it is somewhere in the corresponding interval. Interval computation estimates the range of possible values of the desired quantity under such interval uncertainty.

In other situations, in addition to the intervals, we also have partial information about the probabilities. In this paper, we describe how to solve this problem in the linearized case, what is computable and what is feasibly computable in the general case, and, somewhat surprisingly, how physics ideas -- that initial conditions are not abnormal, that every theory is only approximate -- can help with the corresponding computations.

Technical Report UTEP-CS-18-66, September 2018

Updated version UTEP-CS-18-66a, December 2018

Towards Parallel Quantum Computing: Standard Quantum Teleportation Algorithm Is, in Some Reasonable Sense, Unique

Oscar Galindo, Olga Kosheleva, and Vladik Kreinovich

Published in Hirosato Seki, Canh Hao Nguyen, Van-Nam Huynh, and
Masahiro Inuiguchi (eds.), *USB Proceedings of the 7th
International Symposium on Integrated Uncertainty in Knowledge
Modelling and Decision Making IUKM'2019*, Nara, Japan, March
27-29, 2019, pp. 1-10.

In many practical problems, the computation speed of modern computers is not sufficient. Due to the fact that all speeds are bounded by the speed of light, the only way to speed up computations is to further decrease the size of the memory and processing cells that form a computational device. At the resulting size level, each cell will consist of a few atoms -- thus, we need to take quantum effects into account. For traditional computational devices, quantum effects are largely a distracting noise, but new quantum computing algorithms have been developed that use quantum effects to speed up computations. In some problems, however, this expected speed-up may not be sufficient. To achieve further speed-up, we need to parallelize quantum computing. For this, we need to be able to transmit a quantum state from the location of one processor to the location of another one; in quantum computing, this process is known as teleportation. A teleportation algorithm is known, but it is not clear how efficient it is: maybe there are other more efficient algorithms for teleportation? In this paper, we show that the existing teleportation algorithm is, in some reasonable sense, unique -- and thus, optimal.

Original file UTEP-CS-18-66 in pdf

Updated version UTEP-CS-18-66a in pdf

Technical Report UTEP-CS-18-65, September 2018

Updated version UTEP-CS-18-65a, December 2018

Why Max and Average Poolings are Optimal in Convolutional Neural Networks

Ahnaf Farhan, Olga Kosheleva, and Vladik Kreinovich

Published in Hirosato Seki, Canh Hao Nguyen, Van-Nam Huynh, and
Masahiro Inuiguchi (eds.), *USB Proceedings of the 7th
International Symposium on Integrated Uncertainty in Knowledge
Modelling and Decision Making IUKM'2019*, Nara, Japan, March
27-29, 2019, pp. 23-34.

In many practical situations, we do not know the exact relation between different quantities; this relation needs to be determined based on the empirical data. This determination is not easy -- especially in the presence of different types of uncertainty. When the data comes in the form of time series and images, many efficient techniques for such determination use algorithms for training convolutional neural networks. As part of this training, such networks "pool" several values corresponding to nearby temporal or spatial points into a single value. Empirically, the most efficient pooling algorithm consists of taking the maximum of the pooled values; the next best is taking the arithmetic mean. In this paper, we provide a theoretical explanation for this empirical optimality.
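For readers unfamiliar with pooling, here is a minimal one-dimensional sketch of the two operations discussed above (our own illustration; real convolutional networks pool over two-dimensional patches):

```python
def pool(values, window, mode="max"):
    # split a sequence into consecutive windows of the given size and
    # pool each window; "max" and "mean" are the two empirically best
    # choices discussed in the abstract above
    out = []
    for i in range(0, len(values), window):
        chunk = values[i:i + window]
        out.append(max(chunk) if mode == "max" else sum(chunk) / len(chunk))
    return out

signal = [1, 3, 2, 8, 4, 6]
print(pool(signal, 2, "max"))   # -> [3, 8, 6]
print(pool(signal, 2, "mean"))  # -> [2.0, 5.0, 5.0]
```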

Original file UTEP-CS-18-65 in pdf

Updated version UTEP-CS-18-65a in pdf

Technical Report UTEP-CS-18-64, September 2018

A Symmetry-Based Explanation of the Main Idea Behind Chubanov's Linear Programming Algorithm

Olga Kosheleva, Vladik Kreinovich, and Thongchai Dumrongpokaphan

Published in: Hung T. Nguyen and Vladik Kreinovich (eds.),
*Algebraic Techniques and Their Use in Describing and Processing
Uncertainty*, Springer, Cham, Switzerland, 2020, pp. 55-64.

Many important real-life optimization problems can be described as optimizing a linear objective function under linear constraints -- i.e., as a linear programming problem. This problem is known to be not easy to solve. Reasonably natural algorithms -- such as iterative constraint satisfaction or simplex method -- often require exponential time. There exist efficient polynomial-time algorithms, but these algorithms are complicated and not very intuitive. Also, in contrast to many practical problems which can be computed faster by using parallel computers, linear programming has been proven to be the most difficult to parallelize. Recently, Sergei Chubanov proposed a modification of the iterative constraint satisfaction algorithm: namely, instead of using the original constraints, he proposed to come up with appropriate derivative constraints. Interestingly, this idea leads to a new polynomial-time algorithm for linear programming -- and to efficient algorithms for many other constraint satisfaction problems. In this paper, we show that an algebraic approach -- namely, the analysis of the corresponding symmetries -- can (at least partially) explain the empirical success of Chubanov's idea.

Technical Report UTEP-CS-18-63, August 2018

Why Bohmian Approach to Quantum Econometrics: An Algebraic Explanation

Vladik Kreinovich, Olga Kosheleva, and Songsak Sriboonchitta

Published in: Hung T. Nguyen and Vladik Kreinovich (eds.),
*Algebraic Techniques and Their Use in Describing and Processing
Uncertainty*, Springer, Cham, Switzerland, 2020, pp. 65-75.

Many equations in economics and finance are very
complex. As a result, existing methods of solving these equations
are very complicated and time-consuming. In many practical
situations, more efficient algorithms for solving new complex
equations appear when it turns out that these equations can be
reduced to equations from other application areas -- equations for
which more efficient algorithms are already known. It turns out
that some equations in economics and finance can be reduced to
equations from physics -- namely, from quantum physics. The
resulting approach for solving economic equations is known as
*quantum econometrics*. In quantum physics, the main objects are
described by complex numbers; so, to have a reduction, we need to
come up with an economic interpretation of these complex numbers.
It turns out that in many cases, the most efficient interpretation
comes when we separately interpret the absolute value (modulus)
and the phase of each corresponding quantum number; the resulting
techniques are known as *Bohmian econometrics*. In this paper,
we use an algebraic approach -- namely, the idea of invariance and
symmetries -- to explain why such an interpretation is empirically
the best.

Technical Report UTEP-CS-18-62, August 2018

Decision Making Under Interval Uncertainty: Beyond Hurwicz Pessimism-Optimism Criterion

Tran Anh Tuan, Vladik Kreinovich, and Thach Ngoc Nguyen

Published in: Vladik Kreinovich, Nguyen Ngoc Thach, Nguyen Duc
Trung, and Dang Van Thanh (eds.), *Beyond Traditional
Probabilistic Methods in Economics*, Springer, Cham, Switzerland,
2019, pp. 176-184.

In many practical situations, we do not know the exact value of the quantities characterizing the consequences of different possible actions. Instead, we often only know lower and upper bounds on these values, i.e., we only know intervals containing these values. To make decisions under such interval uncertainty, the Nobelist Leo Hurwicz proposed his optimism-pessimism criterion. It is known, however, that this criterion is not perfect: there are examples of actions which this criterion considers to be equivalent but for which common sense indicates that one of them is preferable. These examples mean that Hurwicz criterion must be extended, to enable us to select between alternatives that this criterion classifies as equivalent. In this paper, we provide a full description of all such extensions.
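For illustration, here is a minimal sketch (Python, with hypothetical interval bounds) of the Hurwicz criterion u = alpha * upper + (1 - alpha) * lower, including a pair of alternatives that the criterion ties -- exactly the kind of situation in which an extension is needed:

```python
def hurwicz(lower, upper, alpha):
    """Hurwicz optimism-pessimism value of an interval [lower, upper]:
    alpha = 1 is pure optimism (upper bound), alpha = 0 pure pessimism."""
    return alpha * upper + (1 - alpha) * lower

# Two hypothetical alternatives with interval-valued outcomes:
a = (0.0, 10.0)  # wide interval [0, 10]
b = (4.0, 6.0)   # narrow interval [4, 6]
alpha = 0.5
print(hurwicz(*a, alpha), hurwicz(*b, alpha))  # 5.0 5.0: Hurwicz ties them
```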

Technical Report UTEP-CS-18-61, August 2018

How to Select the Best Paper: Towards Justification (and Possible Enhancement) of Current Semi-Heuristic Procedures

Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 47, pp. 101-106.

To select the best paper at a conference or in a journal, people use reasonably standard semi-heuristic procedures like averaging scores. These procedures usually work well, but sometimes, new situations appear for which the existing procedures are not automatically applicable. Since the existing procedures are heuristic, it is often not clear how to extend them to new situations. In this paper, we provide a possible explanation for the existing procedures. This explanation enables us to naturally generalize these procedures to possible new situations.

Technical Report UTEP-CS-18-60, August 2018

Which Fuzzy Logic Operations Are Most Appropriate for Ontological Semantics: Theoretical Explanation of Empirical Observations

Vladik Kreinovich and Olga Kosheleva

To appear in: Salvatore
Attardo (ed.), *Script-Based Semantics: Foundations and
Applications*, De Gruyter Mouton, Berlin, 2020, pp. 257-267.

In several of their papers, Victor Raskin and coauthors proposed to use fuzzy techniques to make ontological semantics techniques more adequate in dealing with natural language. Specifically, they showed that the most adequate results appear when we use min as an "and"-operation and max as an "or"-operation. It is interesting that in other applications of fuzzy techniques, such as intelligent control, other versions of fuzzy techniques are the most adequate. In this chapter, we explain why the above techniques are empirically the best in the semantics case.

Technical Report UTEP-CS-18-59, August 2018

Why Triangular and Trapezoid Membership Functions: A Simple Explanation

Vladik Kreinovich, Olga Kosheleva, and Shahnaz Shahbazova

Published in: Shahnaz N. Shahbazova, Janusz Kacprzyk,
Valentina Emilia Balas, and Vladik Kreinovich (eds.), *Proceedings
of the World Conference on Soft Computing*, Baku, Azerbaijan,
May 29-31, 2018.

In principle, in applications of fuzzy techniques, we can have different complex membership functions. In many practical applications, however, it turns out that to get a good quality result -- e.g., a good quality control -- it is sufficient to consider simple triangular and trapezoid membership functions. There exist explanations for this empirical phenomenon, but the existing explanations are rather mathematically sophisticated and are, thus, not very intuitively clear. In this paper, we provide a simple -- and thus, more intuitive -- explanation for the ubiquity of triangular and trapezoid membership functions.
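A minimal sketch (plain Python; the parameter values are hypothetical) of the two membership-function shapes discussed above:

```python
def triangular(x, a, b, c):
    """Triangular membership function: support [a, c], peak of 1 at b."""
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def trapezoid(x, a, b, c, d):
    """Trapezoid membership function: rises on [a, b], flat at 1 on [b, c],
    falls on [c, d]."""
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

print(triangular(2.0, 0.0, 2.0, 4.0))       # 1.0 (at the peak)
print(trapezoid(2.5, 0.0, 1.0, 3.0, 4.0))   # 1.0 (on the flat part)
```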

Technical Report UTEP-CS-18-58, August 2018

Asset-Based Teaching and Learning with Diverse Learners in Postsecondary Settings

Erika Mein

The demographic composition of students in U.S. institutions of higher education is rapidly shifting. We know that 21st century learners are more digitally adept and more socially, economically, and culturally/linguistically diverse than at any moment historically. The University of Texas at El Paso's (UTEP) student body reflects these broader demographic changes taking place nationwide: more than 80% of UTEP students are Latina/o, with the majority identifying as bilingual; more than 50% of students are the first in their families to attend college; and roughly half of students are Pell-eligible (many of whom have annual family incomes of less than $20,000). For these reasons, UTEP is poised to be a pedagogical leader in approaches to maximizing 21st century student learning at the postsecondary level across disciplines, with a particular focus on linguistically diverse student populations.

Traditionally, Latina/o students in the K-20 pipeline -- not unlike those at UTEP -- have had to contend with deficit notions surrounding their academic performance and achievement. This deficit thinking has placed emphasis on students' deficiencies -- whether in terms of language, cognition, or motivation, among other factors -- rather than the structural conditions, such as inequitable funding for schools, that have tended to contribute to the persistent under-achievement of certain groups (Valencia, 2010).

As a challenge to deficit explanations of Latina/o student academic under-achievement, the recent 10-year student success framework adopted by UTEP, known as the UTEP Edge, advocates an asset-based approach to working with students both inside and outside of the classroom. Drawing on educational research as well as community development literature, these asset-based pedagogical approaches emphasize students' individual and collective strengths, skills, and capacities as the starting point for learning and engagement. Such approaches do not claim to resolve the systemic conditions that contribute to persistent inequities experienced by minoritized students in the K-20 pipeline; rather, they are focused on reconfiguring teaching and learning to promote equity at the classroom level.

This paper provides an outline of the conceptual underpinnings of an asset-based framework for teaching and learning (ABTL), highlights key characteristics of ABTL with culturally and linguistically diverse learners, and provides examples of ABTL in the classroom, across disciplines.

Technical Report UTEP-CS-18-57, July 2018

Why Triangular and Trapezoid Membership Functions Are Efficient in Design Applications

Afshin Gholamy, Olga Kosheleva, and Vladik Kreinovich

A revised version published in *Ontology of Designing*, 2019,
Vol. 9, No. 2, pp. 253-260.

In many design problems, it is important to take into account
expert knowledge. Experts often describe their knowledge by using
imprecise ("fuzzy") natural-language words like "small". To
describe this imprecise knowledge in computer-understandable
terms, Zadeh came up with a special *fuzzy* methodology --
techniques that have been successful in many applications. This
methodology starts with eliciting, from the expert, a membership
function corresponding to each imprecise term -- a function that
assigns, to each possible value of the corresponding quantity, a
degree to which this value satisfies the relevant property (e.g.,
a degree to which, in the expert's opinion, this value is small).
In principle, we can have complex membership functions. However,
somewhat surprisingly, in many applications, the simplest
membership functions -- of triangular or trapezoid shape -- turned
out to be the most efficient. There exist some explanations for
this surprising empirical phenomenon, but these explanations only
work when we use the simplest possible "and"-operation -- minimum.
In this paper, we provide a new, more general explanation which is
applicable to all possible "and"-operations.

Technical Report UTEP-CS-18-56, July 2018

Summation of Divergent Infinite Series: How Natural Are the Current Tricks

Mourat Tchoshanov, Olga Kosheleva, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2019,
Vol. 50, pp. 99-106.

Infinities are usually an interesting topic for students, especially when they lead to what seems like paradoxes, when we have two different seemingly correct answers to the same question. One such case is the summation of divergent infinite sums: on the one hand, the sum is clearly infinite; on the other hand, reasonable ideas lead to a finite value for this same sum. A usual way to come up with a finite sum for a divergent infinite series is to find a 1-parametric family of series that includes the given series for a specific value p = p0 of the corresponding parameter and for which the sum converges for some other values p. For the values p for which this sum converges, we find the expression s(p) for the resulting sum, and then we use the value s(p0) as the desired sum of the divergent infinite series. The extent to which the result is reasonable depends on how natural the corresponding generalizing family is. In this paper, we show that from the physical viewpoint, the existing selection of the families is very natural: it is in perfect accordance with the natural symmetries.
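A standard textbook instance of this trick (not necessarily the paper's example) is the geometric family s(p) = 1 + p + p^2 + ... = 1/(1-p): the closed form agrees with the series for |p| < 1, and evaluating it at p0 = 2 assigns the finite value -1 to the divergent series 1 + 2 + 4 + 8 + ...:

```python
def geometric_partial_sum(p, n_terms):
    """Partial sum 1 + p + p^2 + ... + p^(n_terms - 1) of the geometric family."""
    return sum(p ** n for n in range(n_terms))

def s(p):
    """Closed form 1/(1 - p): equals the series sum for |p| < 1,
    and serves as the extended 'sum' for other values of p."""
    return 1.0 / (1.0 - p)

# Inside the convergence region, the closed form matches the series:
assert abs(geometric_partial_sum(0.5, 60) - s(0.5)) < 1e-12
# Outside it, the same formula assigns a finite value to a divergent series:
print(s(2.0))  # -1.0: the "sum" assigned to 1 + 2 + 4 + 8 + ...
```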

Technical Report UTEP-CS-18-55, July 2018

Why STEM?

Olga Kosheleva and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2019,
Vol. 50, pp. 91-98.

Is the idea of combining science, technology, engineering, and mathematics into a single STEM complex a fashionable tendency, as some educators think -- or is there a deep reason behind this combination? In this paper, we show that the latest developments in Theory of Computation make such a union necessary and desirable.

Technical Report UTEP-CS-18-54, July 2018

A Turing Machine Is Just a Finite Automaton with Two Stacks: A Comment on Teaching Theory of Computation

Vladik Kreinovich and Olga Kosheleva

Published in *Proceedings of the 8th International
Scientific-Practical Conference "Mathematical Education in Schools
and Universities: Innovations in the Information Space"
MATHEDU'2018*, Kazan, Russia, October 17-21, 2018, pp. 152-156.

Traditionally, when we teach Theory of Computation, we start with finite automata, we show that they are not sufficient, then we switch to pushdown automata (i.e., automata-with-stacks). Automata-with-stacks are also not sufficient, so we introduce Turing machines. The problem is that while the transition from finite automata to automata-with-stacks is reasonably natural, Turing machines are drastically different, and as a result, the transition to Turing machines is difficult for some students. In this paper, we propose to solve this pedagogical problem by emphasizing that a Turing machine is, in effect, nothing else but a finite automaton with two stacks. This representation makes the transition to Turing machines much more natural and thus, easier to understand and to learn.
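The two-stack view of the tape can be sketched as follows (an illustrative Python sketch; the class and method names are ours, not from the paper). One stack holds everything to the left of the head, the other holds the current cell and everything to its right; moving the head is just popping from one stack and pushing onto the other:

```python
class TwoStackTape:
    """Turing-machine tape simulated by two stacks:
    `left` holds cells to the left of the head (top = nearest),
    `right` holds the current cell and everything to its right."""

    def __init__(self, contents, blank="_"):
        self.blank = blank
        self.left = []
        self.right = list(reversed(contents))  # top of `right` = current cell

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, symbol):
        if self.right:
            self.right[-1] = symbol
        else:
            self.right.append(symbol)

    def move_right(self):  # push the current cell onto `left`, expose the next
        self.left.append(self.right.pop() if self.right else self.blank)

    def move_left(self):   # pop from `left` back onto `right`
        self.right.append(self.left.pop() if self.left else self.blank)

tape = TwoStackTape("abc")
tape.move_right()
print(tape.read())  # 'b'
tape.write("X")
tape.move_left()
print(tape.read())  # 'a'
```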

Technical Report UTEP-CS-18-53, July 2018

Composing a Cross-platform Development Environment Using Maven

Terry J. Speicher and Yoonsik Cheon

It is challenging to develop a cross-platform application, that is, an application that runs on multiple platforms. It requires not only code reuse but also an individual build or compilation for each platform, possibly with different development tools. In this paper, we show a simple approach for creating a cross-platform application using Maven, a build tool. We configure a cross-platform development environment by composing a set of platform-specific tools or integrated development environments (IDE's). The key idea of our approach is to use Maven to immediately propagate changes made using one platform tool, or IDE, to other platforms. For this, we decompose an application into platform-independent and platform-dependent parts and make the individual tools and IDE's share the platform-independent part in the form of a reusable component or library. We explain our approach in detail by creating a sample application that runs on the Java platform and the Android platform. The development environment consists of IntelliJ IDEA (for Java) and Android Studio. Our approach provides a way to set up a custom, cross-platform development environment by letting software developers pick platform-specific tools of their choices. This helps maximize code reuse in a multi-platform application and creates a continuous integration environment.

Technical Report UTEP-CS-18-52, July 2018

Optimization under Fuzzy Constraints: From a Heuristic Algorithm to an Algorithm That Always Converges

Vladik Kreinovich and Juan Carlos Figueroa-Garcia

To appear in *Proceedings of the Workshop on Engineering Applications
WEA'2018*, Medellin, Colombia, October 17-19, 2018

An efficient iterative heuristic algorithm has been used to implement Bellman-Zadeh solution to the problem of optimization under fuzzy constraints. In this paper, we analyze this algorithm, explain why it works, show that there are cases when this algorithm does not converge, and propose a modification that always converges.

Technical Report UTEP-CS-18-51, May 2018

Updated version UTEP-CS-18-51a, September 2018

Measurement-Type "Calibration" of Expert Estimates Improves Their Accuracy and Their Usability: Pavement Engineering Case Study

Edgar Daniel Rodriguez Velasquez, Carlos M. Chang Albitres, and Vladik Kreinovich

To appear in *Proceedings
of the IEEE Symposium on Computational Intelligence for
Engineering Solutions CIES'2018*, Bengaluru, India,
November 18-21, 2018.

In many application areas, including pavement engineering, experts are used to estimate the values of the corresponding quantities. Expert estimates are often imprecise. As a result, it is difficult to find experts whose estimates will be sufficiently accurate, and for the selected experts, the estimates are often barely within the desired accuracy. A similar situation sometimes happens with measuring instruments, but usually, if a measuring instrument stops being accurate, we do not dismiss it right away, we first try to re-calibrate it -- and this re-calibration often makes it more accurate. We propose to do the same for experts -- calibrate their estimates. On the example of pavement engineering, we show that this calibration enables us to select more qualified experts and to make the estimates of the current experts more accurate.

Original file UTEP-CS-18-51 in pdf

Updated version UTEP-CS-18-51a in pdf

Technical Report UTEP-CS-18-50, May 2018

Updated version UTEP-CS-18-50a, September 2018

Current Quantum Cryptography Algorithm Is Optimal: A Proof

Oscar Galindo, Vladik Kreinovich, and Olga Kosheleva

To appear in *Proceedings
of the IEEE Symposium on Computational Intelligence for
Engineering Solutions CIES'2018*, Bengaluru, India,
November 18-21, 2018.

One of the main reasons for the current interest in quantum computing is that, in principle, quantum algorithms can break the RSA encoding, the encoding that is used for the majority of secure communications -- in particular, the majority of e-commerce transactions are based on this encoding. This does not mean, of course, that with the emergence of quantum computers, there will be no more ways to communicate secretly: while the existing non-quantum schemes will be compromised, there exists a quantum cryptographic scheme that enables us to secretly exchange information. In this scheme, however, there is a certain probability that an eavesdropper will not be detected. A natural question is: can we decrease this probability by an appropriate modification of the current quantum cryptography algorithm? In this paper, we show that such a decrease is not possible: the current quantum cryptography algorithm is, in some reasonable sense, optimal.

Original file UTEP-CS-18-50 in pdf

Updated version UTEP-CS-18-50a in pdf

Technical Report UTEP-CS-18-49, May 2018

Algorithmic Need for Subcopulas

Thach Ngoc Nguyen, Olga Kosheleva, Vladik Kreinovich, and Hoang Phuong Nguyen

Published in: Vladik Kreinovich and Songsak Sriboonchitta (eds.),
*Structural Changes and Their Econometric Modeling*, Springer
Verlag, Cham, Switzerland, 2019, pp. 172-181.

One of the efficient ways to describe the dependence
between random variables is by describing the corresponding
copula. For continuous distributions, the copula is uniquely
determined by the corresponding distribution. However, when the
distributions are not continuous, the copula is no longer unique;
what is unique is a *subcopula*, a function C(u,v) that has
values only for some pairs (u,v). From the purely mathematical
viewpoint, it may seem like subcopulas are not needed, since every
subcopula can be extended to a copula. In this paper, we prove,
however, that from the algorithmic viewpoint, it is, in general,
not possible to always generate a copula. Thus, from the
algorithmic viewpoint, subcopulas are needed.

Technical Report UTEP-CS-18-48, May 2018

Blockchains Beyond Bitcoin: Towards Optimal Level of Decentralization in Storing Financial Data

Thach Ngoc Nguyen, Olga Kosheleva, Vladik Kreinovich, and Hoang Phuong Nguyen

Published in: Vladik Kreinovich, Nguyen Ngoc Thach, Nguyen Duc
Trung, and Dang Van Thanh (eds.), *Beyond Traditional
Probabilistic Methods in Economics*, Springer, Cham, Switzerland,
2019, pp. 163-167.

In most current financial transactions, the record of each transaction is stored in three places: with the seller, with the buyer, and with the bank. This currently used scheme is not always reliable. It is therefore desirable to introduce duplication to increase the reliability of financial records. A known absolutely reliable scheme is blockchain -- originally invented to deal with bitcoin transactions -- in which the record of each financial transaction is stored at every single node of the network. The problem with this scheme is that, due to the enormous duplication level, if we extend this scheme to all financial transactions, it would require too much computation time. So, instead of sticking to the current scheme or switching to the blockchain-based full duplication, it is desirable to come up with the optimal duplication scheme. Such a scheme is provided in this paper.

Technical Report UTEP-CS-18-47, May 2018

Quantum Approach Explains the Need for Expert Knowledge: On the Example of Econometrics

Songsak Sriboonchitta, Hung T. Nguyen, Olga Kosheleva, Vladik Kreinovich, and Thach Ngoc Nguyen

Published in: Vladik Kreinovich and Songsak Sriboonchitta (eds.),
*Structural Changes and Their Econometric Modeling*, Springer
Verlag, Cham, Switzerland, 2019, pp. 191-199.

The main purposes of econometrics are: to describe economic phenomena, and to find out how to regulate these phenomena to get the best possible results. There have been many successes in both purposes. Companies and countries actively use econometric models in making economic decisions. However, in spite of all the successes of econometrics, most economically important decisions are not based only on the econometric models -- they also take into account expert opinions, and it has been shown that these opinions often drastically improve the resulting decisions. Experts -- and not econometricians -- are still largely in charge of the world economy. Similarly, in many other areas of human activities, ranging from sports to city planning to teaching, in spite of all the successes of mathematical models, experts are still irreplaceable. But why? In this paper, we explain this phenomenon by taking into account that many complex systems are well described by quantum equations, and in quantum physics, the best computational results are obtained when we allow the system to make somewhat imprecise queries -- the kind that experts ask.

Technical Report UTEP-CS-18-46, May 2018

Why Quantum (Wave Probability) Models Are a Good Description of Many Non-Quantum Complex Systems, and How to Go Beyond Quantum Models

Miroslav Svitek, Olga Kosheleva, Vladik Kreinovich, and Thach Ngoc Nguyen

Published in: Vladik Kreinovich, Nguyen Ngoc Thach, Nguyen Duc
Trung, and Dang Van Thanh (eds.), *Beyond Traditional
Probabilistic Methods in Economics*, Springer, Cham, Switzerland,
2019, pp. 168-175.

In many practical situations, it turns out to be beneficial to use techniques from quantum physics in describing non-quantum complex systems. For example, quantum techniques have been very successful in econometrics and, more generally, in describing phenomena related to human decision making. In this paper, we provide a possible explanation for this empirical success. We also show how to modify quantum formulas to come up with an even more accurate description of the corresponding phenomena.

Technical Report UTEP-CS-18-45, May 2018

Why Hammerstein-Type Block Models Are So Efficient: Case Study of Financial Econometrics

Thongchai Dumrongpokaphan, Afshin Gholamy, Vladik Kreinovich, and Hoang Phuong Nguyen

Published in: Vladik Kreinovich, Nguyen Ngoc Thach, Nguyen Duc
Trung, and Dang Van Thanh (eds.), *Beyond Traditional
Probabilistic Methods in Economics*, Springer, Cham, Switzerland,
2019, pp. 129-136.

In the first approximation, many economic phenomena can be described by linear systems. However, many economic processes are non-linear. So, to get a more accurate description of economic phenomena, it is necessary to take this non-linearity into account. In many economic problems, among many different ways to describe non-linear dynamics, the most efficient turned out to be Hammerstein-type block models, in which the transition from one moment of time to the next consists of several consecutive blocks: linear dynamic blocks and blocks describing static non-linear transformations. In this paper, we explain why such models are so efficient in econometrics.
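A minimal sketch (Python; the coefficients, the input signal, and the saturating nonlinearity are hypothetical) of one step of such a block model: a static nonlinearity g applied to the input, feeding a first-order linear dynamic block:

```python
def hammerstein_step(y_prev, u, a, b, g):
    """One step of a Hammerstein-type block model: a static nonlinearity g
    applied to the input, followed by first-order linear dynamics
    y_t = a * y_{t-1} + b * g(u_t)."""
    return a * y_prev + b * g(u)

# Hypothetical illustration: a saturation block feeding a stable linear block.
g = lambda u: max(-1.0, min(1.0, u))  # static nonlinearity: saturation at +/- 1
y = 0.0
for u in [0.5, 2.0, -3.0, 0.0]:
    y = hammerstein_step(y, u, a=0.8, b=0.2, g=g)
print(round(y, 4))  # 0.0192
```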

Technical Report UTEP-CS-18-44, May 2018

Why the Best Predictive Models Are Often Different from the Best Explanatory Models: A Theoretical Explanation

Songsak Sriboonchitta, Luc Longpre, Vladik Kreinovich, and Thongchai Dumrongpokaphan

Published in: Vladik Kreinovich and Songsak Sriboonchitta (eds.),
*Structural Changes and Their Econometric Modeling*, Springer
Verlag, Cham, Switzerland, 2019, pp. 163-171.

Traditionally, in statistics, it was implicitly assumed that models which are the best predictors also have the best explanatory power. Lately, many examples have been provided that show that the best predictive models are often different from the best explanatory models. In this paper, we provide a theoretical explanation for this difference.

Technical Report UTEP-CS-18-43, May 2018

How to Take Expert Uncertainty into Account: Economic Approach Illustrated by Pavement Engineering Applications

Edgar Daniel Rodriguez Velasquez, Carlos M. Chang Albitres, Thach Ngoc Nguyen, Olga Kosheleva, and Vladik Kreinovich

Published in: Vladik Kreinovich and Songsak Sriboonchitta (eds.),
*Structural Changes and Their Econometric Modeling*, Springer
Verlag, Cham, Switzerland, 2019, pp. 182-190.

In many application areas, we rely on expert estimates. For example, in pavement engineering, we often rely on expert graders to gauge the condition of road segments and to see which repairs are needed. Expert estimates are imprecise; it is desirable to take the resulting uncertainty into account when making the corresponding decisions. The traditional approach is to first apply the traditional statistical methods to get the most accurate estimate, and then to take the corresponding uncertainty into account when estimating the economic consequences of the resulting decision. On the example of pavement engineering applications, we show that it is beneficial to apply the economic approach from the very beginning. The resulting formulas are in good accordance with the general way people make decisions in the presence of risk.

Technical Report UTEP-CS-18-42, May 2018

Why Threshold Models: A Theoretical Explanation

Thongchai Dumrongpokaphan, Vladik Kreinovich, and Songsak Sriboonchitta

Published in: Vladik Kreinovich, Nguyen Ngoc Thach, Nguyen Duc
Trung, and Dang Van Thanh (eds.), *Beyond Traditional
Probabilistic Methods in Economics*, Springer, Cham, Switzerland,
2019, pp. 137-145.

Many economic phenomena are well described by linear
models. In such models, the predicted value of the desired
quantity -- e.g., the future value of an economic characteristic
-- linearly depends on the current values of this and related
economic characteristics and on the numerical values of external
effects. Linear models have a clear economic interpretation: they
correspond to situations when the overall effect does not depend,
e.g., on whether we consider a loose federation as a single
country or as several countries. While linear models are often
reasonably accurate, to get more accurate predictions, we need to
take into account that real-life processes are nonlinear. To take
this nonlinearity into account, economists use piece-wise linear
(*threshold*) models, in which we have several different
linear dependencies in different domains. Surprisingly, such
piece-wise linear models often work better than more traditional
models of non-linearity -- e.g., models that take quadratic terms
into account. In this paper, we provide a theoretical explanation
for this empirical success of threshold models.
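A minimal illustration (Python; the regimes and the threshold are hypothetical, not from the paper) of such a piecewise-linear threshold model -- a different linear law applies depending on which side of the threshold the current value falls:

```python
def threshold_step(x, a_low, a_high, c):
    """One step of a simple threshold (piecewise-linear) autoregressive model:
    coefficient a_low applies below the threshold c, a_high above it."""
    return a_low * x if x < c else a_high * x

# Hypothetical regimes: growth below the threshold, contraction above it.
x = 0.5
trajectory = []
for _ in range(6):
    x = threshold_step(x, a_low=1.5, a_high=0.5, c=1.0)
    trajectory.append(round(x, 4))
print(trajectory)
```

With these parameters the trajectory keeps crossing the threshold, so both linear regimes are exercised.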

Technical Report UTEP-CS-18-41, May 2018

Qualitative conditioning in an interval-based possibilistic setting

Salem Benferhat, Vladik Kreinovich, Amelie Levray, and Karim Tabia

Published in *Fuzzy Sets and Systems*, 2018, Vol. 343, No. 1,
pp. 35-49.

Possibility theory and possibilistic logic are well-known uncertainty frameworks particularly suited for representing and reasoning with uncertain, partial and qualitative information. Belief update plays a crucial role when updating beliefs and uncertain pieces of information in the light of new evidence. This paper deals with conditioning uncertain information in a qualitative interval-valued possibilistic setting. The first important contribution concerns a set of three natural postulates for conditioning interval-based possibility distributions. We show that any interval-based conditioning satisfying these three postulates is necessarily based on the set of compatible standard possibility distributions. The second contribution consists in a proposal of efficient procedures to compute the lower and upper endpoints of the conditional interval-based possibility distribution while the third important contribution provides a syntactic counterpart of conditioning interval-based possibility distributions in case where these latter are compactly encoded in the form of possibilistic knowledge bases.

Technical Report UTEP-CS-18-40, April 2018

Why Bellman-Zadeh Approach to Fuzzy Optimization

Olga Kosheleva and Vladik Kreinovich

Published in *Applied Mathematical Sciences*, 2018, Vol. 12,
No. 11, pp. 517-522.

In many cases, we need to select the best of the possible alternatives, but we do not know for sure which alternatives are possible and which are not possible. Instead, for each alternative x, we have a subjective probability p(x) that this alternative is possible. In 1970, Richard Bellman and Lotfi Zadeh proposed a heuristic method for selecting an alternative under such uncertainty. Interestingly, this method works very well in many practical applications, while similarly motivated alternative formulas do not work so well. In this paper, we explain the empirical success of the Bellman-Zadeh approach by showing that its formulas can be derived from the general decision theory recommendations.
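The Bellman-Zadeh recipe itself -- pick the alternative that maximizes the minimum of the degrees to which the goal and the constraint are satisfied -- can be sketched as follows (Python; the membership functions and the grid of alternatives are hypothetical):

```python
def bellman_zadeh_choice(alternatives, mu_goal, mu_constraint):
    """Bellman-Zadeh selection: the degree to which an alternative is a
    'good solution' is min(degree of meeting the goal, degree of
    satisfying the constraint); pick the alternative maximizing this min."""
    return max(alternatives, key=lambda x: min(mu_goal(x), mu_constraint(x)))

# Hypothetical membership degrees on a small grid of alternatives:
xs = [0.0, 0.25, 0.5, 0.75, 1.0]
mu_goal = lambda x: x              # larger x better meets the goal
mu_constraint = lambda x: 1.0 - x  # larger x violates the constraint more
print(bellman_zadeh_choice(xs, mu_goal, mu_constraint))  # 0.5
```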

Technical Report UTEP-CS-18-39, April 2018

How Interval Measurement Uncertainty Affects the Results of Data Processing: A Calculus-Based Approach to Computing the Range of a Box

Andrew Pownuk and Vladik Kreinovich

Published in * Mathematical Structures and Modeling*, 2018,
Vol. 46, pp. 118-124.

In many practical applications, we are interested in the values of
the quantities y_{1}, ..., y_{m} which are difficult (or even
impossible) to measure directly. A natural idea to estimate these
values is to find easier-to-measure related quantities
x_{1}, ..., x_{n} and to use the known relation to estimate the
desired values y_{i}. Measurements come with uncertainty, and
often, the only thing we know about the actual value of each
auxiliary quantity x_{i} is that it belongs to the interval
[X_{i} − Δ_{i}, X_{i} +
Δ_{i}], where X_{i} is
the measurement result, and Δ_{i} is the upper bound on the
absolute value of the measurement error Δ x_{i} =
X_{i} − x_{i}. In
such situations, instead of a single value of a tuple
y = (y_{1}, ..., y_{m}), we have a range of
possible values. In this
paper, we provide calculus-based algorithms for computing this
range.
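One standard calculus-based observation behind such algorithms: if the sign of each partial derivative of f is constant on the box, then each endpoint of the range is attained at a corner obtained by picking the appropriate endpoint of each interval. A minimal sketch of this idea (the function names are ours, not from the paper):

```python
from typing import Callable, List, Tuple

def range_of_monotone(f: Callable[..., float],
                      box: List[Tuple[float, float]],
                      signs: List[int]) -> Tuple[float, float]:
    """Range of f over a box when f is monotone in each variable.

    signs[i] = +1 if f is non-decreasing in x_i on the box,
               -1 if f is non-increasing in x_i.
    The minimum (maximum) is then attained by taking the
    corresponding endpoint of each interval [lo_i, hi_i].
    """
    x_min = [lo if s > 0 else hi for (lo, hi), s in zip(box, signs)]
    x_max = [hi if s > 0 else lo for (lo, hi), s in zip(box, signs)]
    return f(*x_min), f(*x_max)

# Example: f(x1, x2) = x1 - x2 is increasing in x1, decreasing in x2.
lo, hi = range_of_monotone(lambda a, b: a - b,
                           [(1.0, 2.0), (0.0, 1.0)], [+1, -1])
```

Here the minimum f(1, 1) = 0 and the maximum f(2, 0) = 2 are found by evaluating f at just two corners of the box.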

Technical Report UTEP-CS-18-38, April 2018

Updated version UTEP-18-38a, June 2018

Soft Computing Ideas Can Help Earthquake Geophysics

Solymar Ayala, Aaron Velasco, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 47, pp. 91-100.

Earthquakes can be devastating, thus it is important to gain a good understanding of the corresponding geophysical processes. One of the challenges in geophysics is that we cannot directly measure the corresponding deep-earth quantities; we have to rely on expert knowledge, knowledge which often comes in terms of imprecise ("fuzzy") words from natural language. To formalize this knowledge, it is reasonable to use techniques that were specifically designed for such a formalization -- namely, fuzzy techniques. In this paper, we formulate the problem of optimally representing such knowledge. By solving the corresponding optimization problem, we conclude that the optimal representation involves using piecewise-constant functions. For geophysics applications, this means that we need to go beyond tectonic plates to explicitly consider parts of the plates that move during the earthquake. We argue that such an analysis will lead to a better understanding of earthquake-related geophysics.

Original file UTEP-CS-18-38 in pdf

Updated version UTEP-CS-18-38a in pdf

Technical Report UTEP-CS-18-37, April 2018

Updated version UTEP-CS-18-37a, June 2018

Fuzzy Ideas Explain a Complex Heuristic Algorithm for Gauging Pavement Conditions

Edgar Daniel Rodriguez Velasquez, Carlos M. Chang Albitres, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 47, pp. 82-90.

To gauge pavement conditions, researchers have come up with a complex heuristic algorithm that combines several expert estimates of pavement characteristics into a single index -- which is well correlated with the pavement's durability and other physical characteristics. While empirically, this algorithm works well, it lacks physical or mathematical justification beyond being a good fit for the available data. This lack of justification decreases our confidence in the algorithm's results -- since it is known that often, empirically successful heuristic algorithms need change when the conditions change. To increase the practitioners' confidence in the resulting pavement condition estimates, it is therefore desirable to come up with a theoretical justification for this algorithm. In this paper, we show that by using fuzzy techniques, it is possible to come up with the desired justification.

Original file UTEP-CS-18-37 in pdf

Updated version UTEP-CS-18-37a in pdf

Technical Report UTEP-CS-18-36, April 2018

Analysis of Prosody Around Turn Starts

Gerardo Cervantes and Nigel G. Ward

We are interested in enabling a robot to communicate with more natural timings: to take turns more appropriately. LSTM models have sometimes been effective for this, but we found them to be unhelpful for some tasks. In this technical report, we look for factors that may explain this difference by statistically examining the prosodic feature values in the vicinity of turn shifts in the data. We observe that the apparent informativeness of prosodic features varies greatly from one dataset to another.

Technical Report UTEP-CS-18-35, April 2018

Updated version UTEP-CS-18-35a, June 2018

When Is Propagation of Interval and Fuzzy Uncertainty Feasible?

Vladik Kreinovich, Andrew M. Pownuk, Olga Kosheleva, and Aleksandra Belina

Published in *Proceedings of the 8th International Workshop on
Reliable Engineering Computing REC'2018*, Liverpool, UK,
July 16-18, 2018.

In many engineering problems, to estimate the desired quantity, we process measurement results and expert estimates. Uncertainty in inputs leads to the uncertainty in the result of data processing. In this paper, we show how the existing feasible methods for propagating the corresponding interval and fuzzy uncertainty can be extended to new cases of potential practical importance.

Original file UTEP-CS-18-35 in pdf

Updated version UTEP-CS-18-35a in pdf
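The feasible baseline that such propagation methods extend is naive interval arithmetic, where each operation on values is replaced by an operation on intervals. A self-contained illustration (our own, not the paper's extended methods):

```python
def i_add(a, b):
    """Interval sum: [a1 + b1, a2 + b2]."""
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    """Interval product: min and max over the four endpoint products."""
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

# Propagating the uncertainty of y = x1 * x2 + x3,
# with x1 in [1, 2], x2 in [-1, 3], x3 in [0, 1]:
y = i_add(i_mul((1, 2), (-1, 3)), (0, 1))
```

For this expression, each input occurs only once, so the naive result is the exact range; in general it is only an enclosure.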

Technical Report UTEP-CS-18-34, April 2018

Updated version UTEP-CS-18-34a, June 2018

What Is the Economically Optimal Way to Guarantee Interval Bounds on Control?

Alfredo Vaccaro, Martine Ceberio, and Vladik Kreinovich

Published in *Proceedings of the 8th International Workshop on
Reliable Engineering Computing REC'2018*, Liverpool, UK,
July 16-18, 2018.

For control under uncertainty, interval methods enable
us to find a box
B=[u^{−}_{1},u^{+}_{1}] × ... ×
[u^{−}_{n},u^{+}_{n}]
for which any control u from B has
the desired properties -- such as stability. Thus, in real-life
control, we need to make sure that u_{i} is in
[u^{−}_{i},u^{+}_{i}]
for all parameters u_{i} describing control.
In this paper, we describe the economically optimal way of
guaranteeing these bounds.

Original file UTEP-CS-18-34 in pdf

Updated version UTEP-CS-18-34a in pdf

Technical Report UTEP-CS-18-33, April 2018

Economics of Commitment: Why Giving Away Some Freedom Makes Sense

Vladik Kreinovich, Olga Kosheleva, Mahdokht Afravi, Genesis Bejarano, and Marisol Chacon

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 46, pp. 73-87.

In general, the more freedom we have, the better choices we can make, and thus, the better possible economic outcomes. However, in practice, people often artificially restrict their future options by making a commitment. At first glance, commitments make no economic sense, and so their ubiquity seems puzzling. Our more detailed analysis shows that commitment often makes perfect economic sense: namely, it is related to the way we take future gains and losses into account. With the traditionally assumed exponential discounting, commitment indeed makes no economic sense, but with the practically observed hyperbolic discounting, commitment is indeed often economically beneficial.
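To illustrate the hyperbolic-vs-exponential point, here is a small numeric sketch (the discount parameters and reward amounts are ours, purely illustrative): under hyperbolic discounting, the preference between two delayed rewards reverses as they draw near -- which is exactly when a commitment made in advance pays off -- while exponential discounting never reverses.

```python
def exponential_value(amount, delay, delta=0.9):
    """Exponential discounting: value = amount * delta**delay."""
    return amount * delta ** delay

def hyperbolic_value(amount, delay, k=1.0):
    """Hyperbolic discounting: value = amount / (1 + k * delay)."""
    return amount / (1.0 + k * delay)

# Two options: $100 after 10 days, or $110 after 11 days.
# Viewed from today, hyperbolic discounting prefers the later, larger reward:
far_small = hyperbolic_value(100, 10)   # 100/11
far_large = hyperbolic_value(110, 11)   # 110/12, slightly larger
# ...but once day 10 arrives (delays 0 and 1), the preference reverses:
near_small = hyperbolic_value(100, 0)   # 100
near_large = hyperbolic_value(110, 1)   # 55
```

With exponential discounting, the comparison depends only on the fixed ratio 110 * delta vs. 100, so the earlier choice stays preferred at both times; the hyperbolic reversal is what makes binding oneself in advance economically rational.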

Technical Report UTEP-CS-18-32, April 2018

Why Under Stress Positive Reinforcement Is More Effective? Why Optimists Study Better? Why People Become Restless? Simple Utility-Based Explanations

Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in *Mathematical Structures and Modeling*, 2018,
Vol. 46, pp. 66-72.

In this paper, we use the utility-based approach to decision making to provide simple answers to the following three questions: Why under stress positive reinforcement is more effective? Why optimists study better? Why people become restless?

Technical Report UTEP-CS-18-31, April 2018

Towards Foundations of Interval and Fuzzy Uncertainty

Mahdokht Afravi, Kehinde Akinola, Fredrick Ayivor, Ramon Bustamante, Erick Duarte, Ahnaf Farhan, Martha Garcia, Govinda K.C., Jeffrey Hope, Olga Kosheleva, Vladik Kreinovich, Jose Perez, Francisco Rodriguez, Christian Servin, Eric Torres, and Jesus Tovar

Published in *Journal of Uncertain Systems*, 2018, Vol. 12,
No. 3, pp. 164-170.

In this paper, we provide a theoretical explanation for many
aspects of interval and fuzzy uncertainty: Why boxes for multi-D
uncertainty? What if we only know Hurwicz's optimism-pessimism
parameter with interval uncertainty? Why swarms of agents are
better than clouds? Which confidence set is the most robust? Why
μ^{p} in fuzzy clustering? How do degrees of confidence change
with time? What is a natural interpretation of Pythagorean and
fuzzy degrees of confidence?

Technical Report UTEP-CS-18-30, April 2018

Why Asset-Based Approach to Teaching Is More Effective than the Usual Deficit-Based Approach, and Why The New Approach Is Not Easy to Implement: A Simple Geometric Explanation

Olga Kosheleva and Vladik Kreinovich

Published in *Geombinatorics*, 2018, Vol. 28, No. 2,
pp. 99-105.

The traditional approach to teaching is based on uncovering
*deficiencies* in a student's knowledge and working on these
deficiencies. Lately, it has been shown that a more efficient
approach to education is instead to start with the student's
strengths (*assets*), and to use these strengths to teach the
students; however, this asset-based approach is not easy to
implement. In this paper, we provide a simple geometric
explanation of why the asset-based approach to teaching is more
efficient and why it is not easy to implement.

Technical Report UTEP-CS-18-29, March 2018

Why Encubation?

Vladik Kreinovich, Rohan Baingolkar, Swapnil S. Chauhan, and Ishtjot S. Kamboj

Published in *International Journal of Computing and
Optimization*, 2018, Vol. 5, No. 1, pp. 5-8.

It is known that some algorithms are feasible, and some take too
long to be practical. For example, if the running time of an
algorithm is 2^{n}, where n = len(x) is the bit size of the
input x, then already for n = 500, the computation time exceeds
the lifetime of the Universe. In computer science, it is usually
assumed that an algorithm A is feasible if and only if it is
*polynomial-time*, i.e., if its number of computational steps
t_{A}(x) on any input x is bounded by a polynomial P(n) of the
input length n = len(x).

An interesting *encubation* phenomenon is that once we succeed
in finding a polynomial-time algorithm for solving a problem,
eventually it turns out to be possible to further decrease its
computation time until we either reach the cubic time
t_{A}(x) ~ n^{3} or reach some even faster time
n^{α} for
α < 3.

In this paper, we provide a possible physics-based explanation for the encubation phenomenon.

Technical Report UTEP-CS-18-28, March 2018

Gartner's Hype Cycle: A Simple Explanation

Jose Perez and Vladik Kreinovich

Published in *International Journal of Computing and
Optimization*, 2018, Vol. 5, No. 1, pp. 1-4.

In the ideal world, any innovation should be gradually accepted. It is natural that initially some people are reluctant to adopt a new, largely untested idea, but as more and more evidence appears that this new idea works, we should see a gradual increase in the number of adopters -- until the idea becomes universally accepted.

In real life, the adoption process is not that smooth. Usually,
after the first few successes, the idea is over-hyped: it is
adopted in situations far beyond the inventors' intent. In these
remote areas, the new idea does not work well, so we have a
natural push-back, when the idea is adopted to a much lesser
extent than is reasonable. Only after these wild oscillations is
the idea finally universally adopted. These oscillations are known
as *Gartner's hype cycle.*

A similar phenomenon is known in economics: when a new positive information about a stock appears, the stock price does not rise gradually: at first, it is somewhat over-hyped and over-priced, and only then, it moves back to a reasonable value.

In this paper, we provide a simple explanation for this oscillation phenomenon.

Technical Report UTEP-CS-18-27, March 2018

Why Zipf's Law: A Symmetry-Based Explanation

Daniel Cervantes, Olga Kosheleva, and Vladik Kreinovich

Published in *International Mathematical Forum*, 2018,
Vol. 13, No. 6, pp. 255-258.

In many practical situations, we have probability distributions
for which, for large values of the corresponding quantity x, the
probability density has the form ρ(x) ~ x^{−α}
for
some α > 0. While, in principle, we have laws corresponding
to different α, most frequently, we encounter situations --
first described by Zipf for linguistics -- when α is close to 1.
The fact that Zipf's law has appeared frequently in many different
situations seems to indicate that there must be some fundamental
reason behind this law. In this paper, we provide a possible
explanation.

Technical Report UTEP-CS-18-26, March 2018

Working on One Part at a Time is the Best Strategy for Software Production: A Proof

Francisco Zapata, Maliheh Zargaran, and Vladik Kreinovich

Published in *Proceedings of the 11th International Workshop on
Constraint Programming and Decision Making CoProd'2018*, Tokyo,
Japan, September 10, 2018; detailed version to appear in:
Martine Ceberio and Vladik Kreinovich (eds.), *Decision Making
under Constraints*, Springer Verlag, Cham, Switzerland.

When a company works on a large software project, it can often start recouping its investments by selling intermediate products with partial functionality. With this possibility in mind, it is important to schedule work on different software parts so as to maximize the profit. There exist several algorithms for solving the corresponding optimization problem, and in all the resulting plans, at each moment of time, we work on one part of the software at a time. In this paper, we prove that this one-part-at-a-time property holds for all optimal plans.

Technical Report UTEP-CS-18-25, March 2018

Towards Foundations of Fuzzy Utility: Taking Fuzziness into Account Naturally Leads to Intuitionistic Fuzzy Degrees

Christian Servin and Vladik Kreinovich

Published in *Proceedings of the 2018 Annual Conference of the
North American Fuzzy Information Processing Society
NAFIPS'2018*, Fortaleza, Brazil, July 4-6, 2018

The traditional utility-based decision making theory assumes that for every two alternatives, the user is either absolutely sure that the first alternative is better, or that the second alternative is better, or that the two alternatives are absolutely equivalent. In practice, when faced with alternatives of similar value, people are often not fully sure which of these alternatives is better. To describe different possible degrees of confidence, it is reasonable to use fuzzy logic techniques. In this paper, we show that, somewhat surprisingly, a reasonable fuzzy modification of the traditional utility elicitation procedure naturally leads to intuitionistic fuzzy degrees.

Technical Report UTEP-CS-18-24, March 2018

How to Gauge Repair Risk?

Francisco Zapata and Vladik Kreinovich

Published in *Proceedings of the 2018 Annual Conference of the
North American Fuzzy Information Processing Society
NAFIPS'2018*, Fortaleza, Brazil, July 4-6, 2018

At present, there exist several automatic tools that, given a piece of software, find locations of possible defects. A general tool does not take into account the specifics of a given program. As a result, while many defects discovered by such a tool can be truly harmful, many uncovered alleged defects are, for this particular software, reasonably (or even fully) harmless. A natural reaction is to repair all the alleged defects, but the problem is that every time we correct a program, we risk introducing new faults. From this viewpoint, it is desirable to be able to gauge the repair risk. This will help us decide which part of the repaired code is most likely to fail and thus needs the most testing, and even whether repairing a probably harmless defect is worth the effort at all -- if, as a result, we increase the probability of a program malfunction. In this paper, we analyze how repair risk can be gauged.

Technical Report UTEP-CS-18-23, March 2018

Updated version UTEP-CS-18-23a, May 2018

How Intelligence Community Interprets Imprecise Evaluative Linguistic Expressions, and How to Justify This Empirical-Based Interpretation

Olga Kosheleva and Vladik Kreinovich

Published in: Oleg Chertov, Tymofiy Mylovanov, Yuriy Kondratenko, Janusz Kacprzyk, Vladik Kreinovich, and Vadim Stefanuk (eds.), "Recent Developments in Data Science and Intelligent Analysis of Information, Proceedings of the XVIII International Conference on Data Science and Intelligent Analysis of Information ICDSIAI'2018, Kiev, Ukraine, June 4-7, 2018", Springer Verlag, Cham, Switzerland, 2018, pp. 81-89.

To provide a more precise meaning to imprecise evaluative linguistic expressions like "probable" or "almost certain", researchers analyzed how often intelligence predictions hedged by each corresponding word turned out to be true. In this paper, we provide a theoretical explanation for the resulting empirical frequencies.

Original file UTEP-CS-18-23 in pdf

Updated version UTEP-CS-18-23a in pdf

Technical Report UTEP-CS-18-22, March 2018

Updated version UTEP-CS-18-22a, May 2018

How to Explain Empirical Distribution of Software Defects by Severity

Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in: Oleg Chertov, Tymofiy Mylovanov, Yuriy Kondratenko, Janusz Kacprzyk, Vladik Kreinovich, and Vadim Stefanuk (eds.), "Recent Developments in Data Science and Intelligent Analysis of Information, Proceedings of the XVIII International Conference on Data Science and Intelligent Analysis of Information ICDSIAI'2018, Kiev, Ukraine, June 4-7, 2018", Springer Verlag, Cham, Switzerland, 2018, pp. 90-99.

In the last decades, several tools have appeared that, given a software package, mark possible defects of different potential severity. Our empirical analysis has shown that in most situations, we observe the same distribution of software defects by severity. In this paper, we present this empirical distribution, and we use interval-related ideas to provide an explanation for this empirical distribution.

Original file UTEP-CS-18-22 in pdf

Updated version UTEP-CS-18-22a in pdf

Technical Report UTEP-CS-18-21, March 2018

How to Explain the Results of the Richard Thaler's 1997 Financial Times Contest

Olga Kosheleva and Vladik Kreinovich

Published in *International Mathematical Forum*, 2018, Vol. 13,
No. 1, pp. 21-214.

In 1997, by using a letter published in Financial Times, Richard H. Thaler, the 2017 Nobel Prize winner in Economics, performed the following experiment: he asked readers to submit numbers from 0 to 100, so that the person whose number is the closest to 2/3 of the average will be the winner. An intuitive answer is to submit 2/3 of the average (50), i.e., 33 1/3. A logical answer, as can be explained, is to submit 0. The actual winning submission was -- depending on how we count -- 12 or 13. In this paper, we propose a possible explanation for this empirical result.
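One common way to make this reasoning precise (our illustration, not necessarily the paper's model) is level-k reasoning: a level-0 player guesses 50, and a level-k player best-responds with 2/3 of the level-(k-1) guess. The winning entry of 12-13 falls between the level-3 and level-4 guesses:

```python
def level_k_guess(k, start=50.0):
    """A level-k reasoner submits (2/3)**k times the level-0 guess of 50."""
    return start * (2.0 / 3.0) ** k

# Guesses for reasoning depths 0 through 4:
# 50.0, 33.33..., 22.22..., 14.81..., 9.87...
guesses = [level_k_guess(k) for k in range(5)]
```

Iterating indefinitely drives the guess to 0 (the fully "logical" answer), so a winning entry of 12-13 suggests that real players reason roughly three to four steps deep.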

Technical Report UTEP-CS-18-20, March 2018

Why Superforecasters Change Their Estimates on Average by 3.5%: A Possible Theoretical Explanation

Olga Kosheleva and Vladik Kreinovich

Published in *International Mathematical Forum*, 2018, Vol. 13,
No. 4, pp. 207-210.

A recent large-scale study of people's forecasting ability has
shown that there is a small group of *superforecasters*, whose
forecasts are significantly more accurate than the forecasts of an
average person. Since forecasting is important in many application
areas, researchers have studied what exactly the superforecasters
do differently -- and how we can learn from them, so that we will
be able to forecast better. One empirical fact that came from this
study is that, in contrast to most people, superforecasters make
much smaller adjustments to their probability estimates. On
average, their average probability change is 3.5%. In this paper,
we provide a possible theoretical explanation for this empirical
value.

Technical Report UTEP-CS-18-19, March 2018

Virtual Agent Interaction Framework (VAIF): A Tool for Rapid Development of Social Agents

Ivan Gris and David Novick

Creating an embodied virtual agent is often a complex process. It involves 3D modeling and animation skills, advanced programming knowledge, and in some cases artificial intelligence or the integration of complex interaction models. Features like lip-syncing to an audio file, recognizing the users' speech, or having the character move at certain times in certain ways, are inaccessible to researchers that want to build and use these agents for education, research, or industrial uses. VAIF, the Virtual Agent Interaction Framework, is an extensively documented system that attempts to bridge that gap and provide inexperienced researchers the tools and means to develop their own agents in a centralized, lightweight platform that provides all these features through a simple interface within the Unity game engine. In this paper we present the platform, describe its features, and provide a case study where agents were developed and deployed in mobile-device, virtual-reality, and augmented-reality platforms by users with no coding experience.

Technical Report UTEP-CS-18-18, February 2018

Reverse Mathematics Is Computable for Interval Computations

Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 11th International Workshop on
Constraint Programming and Decision Making CoProd'2018*, Tokyo,
Japan, September 10, 2018; detailed version will appear in:
Martine Ceberio and Vladik Kreinovich (eds.), *Decision Making
under Constraints*, Springer Verlag, Cham, Switzerland.

For systems of equations and/or inequalities under interval
uncertainty, interval computations usually provide us with a box
whose all points satisfy this system. Reverse mathematics means
finding necessary and sufficient conditions, i.e., in this case,
describing the set of *all* the points that satisfy the given
system. In this paper, we show that while we cannot always exactly
describe this set, it is possible to have a general algorithm
that, given ε > 0, provides an
ε-approximation to the desired solution set.

Technical Report UTEP-CS-18-17, February 2018

Italian Folk Multiplication Algorithm Is Indeed Better: It Is More Parallelizable

Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the 11th International Workshop on
Constraint Programming and Decision Making CoProd'2018*, Tokyo,
Japan, September 10, 2018; revised version will appear in
Martine Ceberio and Vladik Kreinovich (eds.),
*Decision Making under Constraints*, Springer Verlag, Cham,
Switzerland.

Traditionally, many ethnic groups had their own versions of arithmetic algorithms. Nowadays, most of these algorithms are studied mostly as pedagogical curiosities, as an interesting way to make arithmetic more exciting to kids: by appealing to their patriotic feelings -- if they are studying the algorithms traditionally used by their ethnic group -- or simply to their sense of curiosity. Somewhat surprisingly, we show that one of these algorithms -- a traditional Italian multiplication algorithm -- is actually in some reasonable sense better than the algorithm that we all normally use -- namely, it is easier to parallelize.
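Assuming the algorithm in question is the lattice ("per gelosia") method historically used in Italy -- an assumption on our part -- the parallelizability is easy to see: every digit-by-digit product is independent of all the others, and only the final place-value summation is sequential. A minimal sketch:

```python
def lattice_multiply(a: int, b: int) -> int:
    """Multiply via independent digit products, then sum by place value.

    The key point: each product digit_a[i] * digit_b[j] can be computed
    independently (in parallel); only the final carry-propagating sum
    is inherently sequential.
    """
    da = [int(d) for d in str(a)][::-1]   # least-significant digit first
    db = [int(d) for d in str(b)][::-1]
    # Step 1 (fully parallelizable): all digit-by-digit products,
    # each tagged with its place value i + j.
    products = [(i + j, x * y) for i, x in enumerate(da)
                               for j, y in enumerate(db)]
    # Step 2 (sequential): accumulate the products by place value.
    return sum(p * 10 ** place for place, p in products)
```

By contrast, in the standard schoolbook algorithm, the carries couple the digit computations within each partial product, which limits parallelism.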

Technical Report UTEP-CS-18-16, February 2018

From Traditional Neural Networks to Deep Learning: Towards Mathematical Foundations of Empirical Successes

Vladik Kreinovich

Published in: Shahnaz N. Shahbazova, Janusz Kacprzyk,
Valentina Emilia Balas, and Vladik Kreinovich (eds.), *Proceedings
of the World Conference on Soft Computing*, Baku, Azerbaijan,
May 29-31, 2018.

How do we make computers think? To make machines that fly, it is reasonable to look at the creatures that know how to fly: the birds. To make computers think, it is reasonable to analyze how we think -- this is the main origin of neural networks. At first, one of the main motivations was speed -- since even with slow biological neurons, we often process information fast. The need for speed motivated traditional 3-layer neural networks. At present, computer speed is rarely a problem, but accuracy is -- this motivated deep learning. In this paper, we concentrate on the need to provide mathematical foundations for the empirical success of deep learning.

Technical Report UTEP-CS-18-15, February 2018

How to Monitor Possible Side Effects of Enhanced Oil Recovery Process

Jose Manuel Dominguez Esquivel, Solymar Ayala Cortez, Aaron Velasco, and Vladik Kreinovich

Published in: Shahnaz N. Shahbazova, Janusz Kacprzyk,
Valentina Emilia Balas, and Vladik Kreinovich (eds.), *Proceedings
of the World Conference on Soft Computing*, Baku, Azerbaijan,
May 29-31, 2018.

To extract all the oil from a well, petroleum engineers pump hot reactive chemicals into the well. This enhanced oil recovery process needs to be thoroughly monitored, since the aggressively hot liquid can seep out and, if unchecked, eventually pollute the sources of drinking water. At present, to monitor this process, engineers measure the seismic waves generated when the liquid fractures the minerals. However, the resulting seismic waves are weak in comparison with the background noise. Thus, the accuracy with which we can locate the spreading liquid based on these weak signals is low, and we get only a very crude approximate understanding of how the liquid propagates. To get a more accurate picture of the liquid propagation, we propose to use active seismic analysis: namely, we propose to generate strong seismic waves and use a large-N array of sensors to observe their propagation.

Technical Report UTEP-CS-18-14, February 2018

Optimization of Quadratic Forms and t-norm Forms on Interval Domain and Computational Complexity

Milan Hladik, Michal Cerny, and Vladik Kreinovich

Published in: Shahnaz N. Shahbazova, Janusz Kacprzyk,
Valentina Emilia Balas, and Vladik Kreinovich (eds.), *Proceedings
of the World Conference on Soft Computing*, Baku, Azerbaijan,
May 29-31, 2018.

We consider the problem of maximization of a
quadratic form over a box. We identify the NP-hardness boundary
for sparse quadratic forms: the problem is polynomially
solvable for O(log n) nonzero entries, but it is NP-hard if the
number of nonzero entries is of the order n^{ε} for
an arbitrarily
small ε > 0. Then we inspect further polynomially solvable
cases. We define a sunflower graph over the quadratic form
and study efficiently solvable cases according to the shape of
this graph (e.g. the case with small sunflower leaves or the
case with a restricted number of negative entries). Finally, we
define a generalized quadratic form, called t-norm form, where
the quadratic terms are replaced by t-norms. We prove that
the optimization problem remains NP-hard with an arbitrary
Lipschitz continuous t-norm.
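As a baseline illustration (not one of the paper's polynomial-time algorithms): for a convex form, i.e., a positive semi-definite matrix A, the maximum of x^T A x over a box is attained at one of its vertices, so exhaustive vertex enumeration is exact -- though exponential, in line with the NP-hardness results discussed above.

```python
from itertools import product

def max_quadratic_over_box(A, box):
    """Brute-force maximum of x^T A x over the vertices of a box.

    Exact when A is positive semi-definite, since the maximum of a
    convex function over a polytope is attained at an extreme point.
    Runs in O(2^n * n^2) time -- exponential in the dimension n.
    """
    n = len(box)
    best = float("-inf")
    for vertex in product(*box):  # all 2^n corner points
        value = sum(A[i][j] * vertex[i] * vertex[j]
                    for i in range(n) for j in range(n))
        best = max(best, value)
    return best

# Example: A = identity (PSD), box [-1, 2] x [-2, 1].
best_value = max_quadratic_over_box([[1, 0], [0, 1]], [(-1, 2), (-2, 1)])
```

Here the maximum of x1^2 + x2^2 is attained at the vertex (2, -2).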

Technical Report UTEP-CS-18-13, February 2018

Which t-Norm Is Most Appropriate for Bellman-Zadeh Optimization

Vladik Kreinovich, Olga Kosheleva, and Shahnaz Shahbazova

Published in *Proceedings
of the World Conference on Soft Computing*, Baku, Azerbaijan,
May 29-31, 2018.

In 1970, Richard Bellman and Lotfi Zadeh proposed a method for finding the maximum of a function under fuzzy constraints. The problem with this method is that it requires the knowledge of the minimum and the maximum of the objective function over the corresponding crisp set, and minor changes in this crisp set can lead to a drastic change in the resulting maximum. It is known that if we use a product "and"-operation (t-norm), the dependence on the maximum disappears. Natural questions are: what if we use other t-norms? Can we eliminate the dependence on the minimum? What if we use a different scaling in our derivation of the Bellman-Zadeh formula? In this paper, we provide answers to all these questions. It turns out that the product is the only t-norm for which there is no dependence on maximum, that it is impossible to eliminate the dependence on the minimum, and we also provide t-norms corresponding to the use of general scaling functions.
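A toy illustration (our own, not from the paper) of why the product t-norm removes the dependence on the maximum M: in the degree mu(x) * (f(x) - m) / (M - m), the value M enters only through the positive constant factor 1/(M - m), which cannot change where the maximum is attained.

```python
def bz_product_argmax(mu, f, xs, m, M):
    """Best alternative under the Bellman-Zadeh approach with the
    product t-norm: maximize mu(x) * (f(x) - m) / (M - m).
    Since M only scales all degrees by the same positive constant,
    the maximizing x does not depend on M."""
    return max(xs, key=lambda x: mu(x) * (f(x) - m) / (M - m))

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
mu = lambda x: max(0.0, 1.0 - abs(x - 1.0))   # fuzzy constraint around 1
f = lambda x: x                                # objective: the larger the better
best1 = bz_product_argmax(mu, f, xs, m=0.0, M=2.0)
best2 = bz_product_argmax(mu, f, xs, m=0.0, M=100.0)  # different M, same choice
```

With the min t-norm min(mu(x), (f(x) - m)/(M - m)), by contrast, changing M can change which of the two arguments is the smaller one, and hence change the selected alternative.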

Technical Report UTEP-CS-18-12, February 2018

When Is Data Processing Under Interval and Fuzzy Uncertainty Feasible: What If Few Inputs Interact? Does Feasibility Depend on How We Describe Interaction?

Milan Hladik, Michal Cerny, and Vladik Kreinovich

Published in *Proceedings
of the World Conference on Soft Computing*, Baku, Azerbaijan,
May 29-31, 2018.

It is known that, in general, data processing under interval and
fuzzy uncertainty is NP-hard -- which means that, unless P = NP,
no feasible algorithm is possible for computing the accuracy of
the result of data processing. It is also known that the
corresponding problem becomes feasible if the inputs do not
interact with each other, i.e., if the data processing algorithm
computes the sum of n functions, each depending on only one of
the n inputs. In general, inputs x_{i} and
x_{j} interact. If we
take into account all possible interactions, and we use bilinear
functions x_{i} *
x_{j} to describe this interaction, we get an
NP-hard problem. This raises two natural questions: what if only a
few inputs interact? What if the interaction is described by some
other functions? In this paper, we show that the problem remains
NP-hard if we use different formulas to describe the inputs'
interaction, and it becomes feasible if we only have O(log(n))
interacting inputs -- but remains NP-hard if the number of
interacting inputs is O(n^{ϵ}) for any ϵ > 0.

Technical Report UTEP-CS-18-11, February 2018

Why Skew Normal: A Simple Pedagogical Explanation

Jose Guadalupe Flores Muniz, Vyacheslav V. Kalashnikov, Nataliya Kalashnykova, Olga Kosheleva, and Vladik Kreinovich

Published in *International Journal of Intelligent Technologies
and Applied Statistics*, 2018, Vol. 11, No. 2, pp. 113-120.

In many practical situations, we only know the first few moments of a random variable, and out of all probability distributions which are consistent with this information, we need to select one. When we know the first two moments, we can use the Maximum Entropy approach and get the normal distribution. However, when we know the first three moments, the Maximum Entropy approach does not work. In such situations, a very efficient selection is the so-called skew normal distribution. However, it is not clear why this particular distribution should be selected. In this paper, we provide an explanation for this selection.

Technical Report UTEP-CS-18-10, February 2018

Revised version UTEP-CS-18-10c, November 2019

Ellipsoidal and Gaussian Kalman Filter Model for Discrete-Time Nonlinear Systems

Ligang Sun, Hamza Alkhatib, Boris Kargoll, Vladik Kreinovich, and Ingo Neumann

To appear in *Mathematics* journal.

In this paper, we propose a new technique -- called Ellipsoidal and Gaussian Kalman filter -- for state estimation of discrete-time nonlinear systems in situations when for some parts of uncertainty, we know the probability distributions, while for other parts of uncertainty, we only know the bounds (but we do not know the corresponding probabilities). Similarly to the usual Kalman filter, our algorithm is iterative: on each iteration, we first predict the state at the next moment of time, and then we use measurement results to correct the corresponding estimates. On each correction step, we solve a convex optimization problem to find the optimal estimate for the system's state (and the optimal ellipsoid for describing the system's uncertainty). Testing our algorithm on several highly nonlinear problems has shown that the new algorithm performs better than the extended Kalman filter technique -- the state estimation technique usually applied to such nonlinear problems.

Original version UTEP-CS-18-10 in pdf

Revised version UTEP-CS-18-10c in pdf

Technical Report UTEP-CS-18-09, February 2018

Why 70/30 or 80/20 Relation Between Training and Testing Sets: A Pedagogical Explanation

Afshin Gholamy, Vladik Kreinovich, and Olga Kosheleva

Published in *International Journal of Intelligent Technologies
and Applied Statistics*, 2018, Vol. 11, No. 2, pp. 105-111.

When learning a dependence from data, to avoid overfitting, it is important to divide the data into the training set and the testing set. We first train our model on the training set, and then we use the data from the testing set to gauge the accuracy of the resulting model. Empirical studies show that the best results are obtained if we use 20-30% of the data for testing, and the remaining 70-80% of the data for training. In this paper, we provide a possible explanation for this empirical result.
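The split itself is straightforward to implement; here is a minimal self-contained sketch (our own, not code from the paper) using a 20% testing fraction:

```python
import random

def split_data(data, test_fraction=0.2, seed=0):
    """Shuffle the data and split it into training and testing parts.

    test_fraction between 0.2 and 0.3 corresponds to the empirically
    best 70/30 -- 80/20 proportions discussed above.
    """
    rng = random.Random(seed)           # fixed seed: reproducible split
    indices = list(range(len(data)))
    rng.shuffle(indices)
    n_test = int(round(test_fraction * len(data)))
    test_set = [data[i] for i in indices[:n_test]]
    train_set = [data[i] for i in indices[n_test:]]
    return train_set, test_set

train_set, test_set = split_data(list(range(100)), test_fraction=0.2)
```

Shuffling before splitting matters: if the data is ordered (e.g., by time or by class), a contiguous split would make the testing set unrepresentative.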

Technical Report UTEP-CS-18-08, February 2018

Why Learning Has Aha-Moments and Why We Should Also Reward Effort, Not Just Results

Gerardo Uranga, Vladik Kreinovich, and Olga Kosheleva

Published in *International Journal of Intelligent Technologies
and Applied Statistics*, 2018, Vol. 11, No. 2, pp. 97-103.

Traditionally, in machine learning, the quality of the result improves steadily with time (usually slowly but still steadily). However, as we start applying reinforcement learning techniques to solve complex tasks -- such as teaching a computer to play a complex game like Go -- we often encounter a situation in which for a long time, there is no improvement, and then suddenly, the system's efficiency jumps almost to its maximum. A similar phenomenon occurs in human learning, where it is known as the aha-moment. In this paper, we provide a possible explanation for this phenomenon, and show that this explanation leads to the need to reward students for effort as well, not only for their results.

Technical Report UTEP-CS-18-07, February 2018

Why Burgers Equation: Symmetry-Based Approach

Leobardo Valera, Martine Ceberio, and Vladik Kreinovich

Published in *Decision Making
under Constraints*, Springer Verlag, Cham, Switzerland.

In many application areas ranging from shock waves to acoustics, we encounter the same partial differential equation, known as Burgers' equation. The fact that the same equation appears in different application domains, with different physics, makes us conjecture that it can be derived from fundamental principles. Indeed, in this paper, we show that this equation can be uniquely determined by the corresponding symmetries.

Technical Report UTEP-CS-18-06, February 2018

Lotfi Zadeh: a Pioneer in AI, a Pioneer in Statistical Analysis, a Pioneer in Foundations of Mathematics, and a True Citizen of the World

Vladik Kreinovich

Published in *International Journal of Intelligent Technologies
and Applied Statistics*, 2018, Vol. 11, No. 2, pp. 87-96.

Everyone knows Lotfi Zadeh as the Father of Fuzzy Logic. There have been -- and will be -- many papers on this important topic. What I want to emphasize in this paper is that his ideas go way beyond fuzzy logic:

- he was a pioneer in AI;
- he was a pioneer in statistical analysis; and
- he was a pioneer in foundations of mathematics.

Technical Report UTEP-CS-18-05, January 2018

Type-2 Fuzzy Analysis Explains Ubiquity of Triangular and Trapezoid Membership Functions

Olga Kosheleva, Vladik Kreinovich, and Shahnaz Shahbazova

*Proceedings
of the World Conference on Soft Computing*, Baku, Azerbaijan,
May 29-31, 2018.

In principle, we can have many different membership functions. Interestingly, however, in many practical applications, triangular and trapezoidal membership functions are the most efficient ones. In this paper, we use a fuzzy approach to explain this empirical phenomenon.
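The two membership-function shapes named in the title are standard and easy to state explicitly; here is a plain sketch using the usual corner-point parameterization (a <= b <= c for triangular, a <= b <= c <= d for trapezoidal).

```python
# Standard triangular and trapezoidal membership functions, the two
# shapes whose ubiquity the paper explains. Parameters are the usual
# corner points: the function is 0 outside [a, c] (resp. [a, d]),
# rises linearly, peaks (resp. plateaus at 1), then falls linearly.

def triangular(x, a, b, c):
    if x <= a or x >= c:
        return 0.0
    if x <= b:
        return (x - a) / (b - a)
    return (c - x) / (c - b)

def trapezoidal(x, a, b, c, d):
    if x <= a or x >= d:
        return 0.0
    if x < b:
        return (x - a) / (b - a)
    if x <= c:
        return 1.0
    return (d - x) / (d - c)

print(triangular(1.0, 0.0, 1.0, 2.0))        # peak: 1.0
print(trapezoidal(1.5, 0.0, 1.0, 2.0, 3.0))  # plateau: 1.0
```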

Technical Report UTEP-CS-18-04, January 2018

How Many Monte-Carlo Simulations Are Needed to Adequately Process Interval Uncertainty: An Explanation of the Smart Electric Grid-Related Simulation Results

Afshin Gholamy and Vladik Kreinovich

Published in *Journal of Innovative Technology and Education*,
2018, Vol. 5, No. 1, pp. 1-5.

One of the possible ways of dealing with interval uncertainty is to use Monte-Carlo simulations. A recent study of using this technique for the analysis of different smart electric grid-related algorithms shows that we need approximately 500 simulations to compute the corresponding interval range with 5% accuracy. In this paper, we provide a theoretical explanation for these empirical results.
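The Monte-Carlo approach to interval uncertainty mentioned above can be sketched as follows: sample each input uniformly from its interval and take the min/max of the resulting values as an (inner) estimate of the range. The function f and the input intervals below are made-up examples; the ~500-sample figure from the paper refers to reaching roughly 5% relative accuracy of the range.

```python
import random

# Monte-Carlo estimate of the range of f over box-shaped (interval)
# uncertainty: sample each input uniformly from its interval, take the
# min and max of the observed values. This always gives an *inner*
# estimate of the true range. Example function and intervals are
# made-up for this sketch.

def mc_range(f, intervals, n_sim=500, seed=0):
    rng = random.Random(seed)
    values = [
        f(*[rng.uniform(lo, hi) for lo, hi in intervals])
        for _ in range(n_sim)
    ]
    return min(values), max(values)

# example: f(x1, x2) = x1 + x2 on [0,1] x [0,1]; true range is [0, 2]
lo, hi = mc_range(lambda x1, x2: x1 + x2, [(0.0, 1.0), (0.0, 1.0)])
print(round(lo, 2), round(hi, 2))
```

With 500 samples, the estimated endpoints land within a few percent of the true range [0, 2], consistent with the accuracy level the abstract describes.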

Technical Report UTEP-CS-18-03, January 2018

Updated version UTEP-CS-18-03a, April 2018

Measures of Specificity Used in the Principle of Justifiable Granularity: A Theoretical Explanation of Empirically Optimal Selections

Olga Kosheleva and Vladik Kreinovich

Published in *Proceedings of the IEEE International Conference on
Fuzzy Systems FUZZ-IEEE'2018*, Rio de Janeiro, July 8-13, 2018,
pp. 688-694.

To process huge amounts of data, one possibility is to combine
some data points into granules, and then process the resulting
granules. For each group of data points, if we try to include all
data points into a granule, the resulting granule often becomes
too wide and thus rather useless; on the other hand, if the
granule is too narrow, it includes only a few of the corresponding
points -- and is, thus, also rather useless. The need for the
trade-off between coverage and specificity is formalized as the
*principle of justifiable granularity*. The specific form of
this principle depends on the selection of a measure of
specificity. Empirical analysis has shown that exponential and
power law measures of specificity are the most adequate. In this
paper, we show that natural symmetries explain this empirically
observed efficiency.

Original file UTEP-CS-18-03 in pdf

Updated version UTEP-CS-18-03a in pdf

Technical Report UTEP-CS-18-02, January 2018

Revised version UTEP-CS-18-02a, April 2018

How to Efficiently Compute Ranges Over a Difference Between Boxes, With Applications to Underwater Localization

Luc Jaulin, Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in *Journal of Uncertain Systems*, 2018, Vol. 12,
No. 3, pp. 190-199.

When using underwater autonomous vehicles, it is important to
localize them. Underwater localization is very approximate. As a
result, instead of a single location x, we get a set X of
possible locations of a vehicle. Based on this set of possible
locations, we need to find the range of possible values of the
corresponding objective function f(x). For missions on the ocean
floor, it is beneficial to take into account that the vehicle is
in the water, i.e., that the location of this vehicle is *not*
in a set X' describing the under-floor matter. Thus, the actual
set of possible locations of a vehicle is a difference set X−X'.
So, it is important to find the ranges of different functions over
such difference sets. In this paper, we propose an efficient
algorithm for solving this problem.
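The set-difference idea above can be illustrated by a brute-force sampling sketch: sample the box X, reject points that fall in the forbidden box X', and take the min/max of f over the survivors. This is only an illustration of the problem statement; the paper's contribution is an efficient algorithm with guaranteed bounds, which this sketch does not provide.

```python
import random

# Brute-force sketch of "range of f over X minus X'": sample the box X,
# discard points inside the forbidden box X', take min/max of f over
# the remaining points. Example f and boxes are made-up; this gives
# only an inner (sampling-based) estimate, not guaranteed bounds.

def range_over_difference(f, box, forbidden, n=10000, seed=0):
    rng = random.Random(seed)
    values = []
    for _ in range(n):
        p = [rng.uniform(lo, hi) for lo, hi in box]
        inside_forbidden = all(lo <= x <= hi
                               for x, (lo, hi) in zip(p, forbidden))
        if not inside_forbidden:
            values.append(f(*p))
    return min(values), max(values)

# example: f(x, y) = x + y on [0,2] x [0,2], minus the corner box
# [1,2] x [1,2]; the supremum over the difference approaches 3
lo, hi = range_over_difference(lambda x, y: x + y,
                               [(0.0, 2.0), (0.0, 2.0)],
                               [(1.0, 2.0), (1.0, 2.0)])
print(round(lo, 1), round(hi, 1))
```

Note that removing the forbidden corner changes the attainable maximum: over the full box the maximum of x + y is 4, but over the difference set it is only (close to) 3, which is exactly why ranges over difference sets must be computed directly rather than over the enclosing box.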

Original file UTEP-CS-18-02 in pdf

Revised version UTEP-CS-18-02a in pdf

Technical Report UTEP-CS-18-01, January 2018

Updated version UTEP-CS-18-01a, April 2018

How to Detect Crisp Sets Based on Subsethood Ordering of Normalized Fuzzy Sets? How to Detect Type-1 Sets Based on Subsethood Ordering of Normalized Interval-Valued Fuzzy Sets?

Christian Servin, Olga Kosheleva, and Vladik Kreinovich

Published in *Proceedings of the IEEE International Conference on
Fuzzy Systems FUZZ-IEEE'2018*, Rio de Janeiro, July 8-13, 2018,
pp. 678-687.

If all we know about normalized fuzzy sets is which set is a subset of which, will we be able to detect crisp sets? It is known that we can do it if we allow all possible fuzzy sets, including non-normalized ones. In this paper, we show that a similar detection is possible if we only allow normalized fuzzy sets. We also show that we can detect type-1 fuzzy sets based on the subsethood ordering of normalized interval-valued fuzzy sets.

Original file UTEP-CS-18-01 in pdf

Updated version UTEP-CS-18-01a in pdf