University of Texas at El Paso
Computer Science Department
Abstracts of 2013 Reports


Technical Report UTEP-CS-13-78, posted December 2013
Fuzziness and Bayesian Analysis in Engineering
Matthias Stein, Michael Beer, and Vladik Kreinovich

Published in Proceedings of the 59th World Statistics Congress, Hong Kong, China, August 25-30, 2013.

An engineering analysis requires a realistic quantification of all input information. The amount and quality of the available information dictates the uncertainty model and its associated quantification concept. For inconsistent information, a distinction between probabilistic and non-probabilistic characteristics is beneficial. In this distinction, uncertainty refers to probabilistic characteristics and non-probabilistic characteristics are summarized as imprecision. When uncertainty and imprecision occur simultaneously, the uncertainty model fuzzy randomness appears useful. In a Bayesian approach the fuzzy probabilistic model provides the opportunity to take account of imprecision in data and in prior expert knowledge. The Bayesian approach extended to inconsistent information is demonstrated by means of an example.

File in pdf


Technical Report UTEP-CS-13-77, December 2013
Updated version UTEP-CS-13-77a, March 2014
"And"- and "or"-operations for "double", "triple", etc. fuzzy sets
Hung T. Nguyen, Vladik Kreinovich, and Olga Kosheleva

Published in Proceedings of the IEEE World Congress on Computational Intelligence WCCI'2014, Beijing, China, July 6-11, 2014.

In the traditional fuzzy logic, the expert's degree of confidence d(A & B) in a complex statement A & B (or A\/B) is uniquely determined by his/her degrees of confidence d(A) and d(B) in the statements A and B, as f&(d(A),d(B)) for an appropriate "and"-operation (t-norm). In practice, for the same degrees d(A) and d(B), we may have different degrees d(A & B) depending on the relation between A and B. The best way to take this relation into account is to explicitly elicit the corresponding degrees d(A & B) and d(A\/B), i.e., to come up with "double" fuzzy sets. If we only elicit information about pairs of statements, then we still need to estimate, e.g., the degree d(A & B & C) based on the known values d(A), d(B), d(C), d(A & B), d(A & C), and d(B & C). In this paper, we explain how to produce such "and"-operations for "double" fuzzy sets -- and how to produce similar "or"-operations.
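
As a rough illustration of the point that d(A & B) is not determined by d(A) and d(B) alone, here is a minimal sketch (the scenario degrees below are made up for illustration and are not taken from the paper):

  # If the expert's degrees are d(A) = d(B) = 0.5, a fixed "and"-operation such as
  # the algebraic product always returns the same estimate for d(A & B), even though
  # the true degree depends on how A and B are related.

  def product_tnorm(a, b):
      """Algebraic product t-norm, one common choice of "and"-operation."""
      return a * b

  d_A, d_B = 0.5, 0.5

  # Case 1: B is essentially the same statement as A, so A & B is just A.
  d_A_and_B_same = 0.5
  # Case 2: B is essentially the negation of A, so A & B is impossible.
  d_A_and_B_opposite = 0.0

  estimate = product_tnorm(d_A, d_B)   # 0.25 in both cases
  print(estimate, d_A_and_B_same, d_A_and_B_opposite)

This is what motivates eliciting the degrees d(A & B) and d(A\/B) directly, i.e., working with "double" fuzzy sets.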

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-76, December 2013
Revised version UTEP-CS-13-76b, January 2014
How to Fully Represent Expert Information about Imprecise Properties in a Computer System -- Random Sets, Fuzzy Sets, and Beyond: An Overview
Hung T. Nguyen and Vladik Kreinovich

Published in International Journal of General Systems, 2014, Vol. 43, Nos. 5-6, pp. 586-609.

To help computers make better decisions, it is desirable to describe all our knowledge in computer-understandable terms. This is easy for knowledge described in terms of numerical values: we simply store the corresponding numbers in the computer. This is also easy for knowledge about precise (well-defined) properties which are either true or false for each object: we simply store the corresponding "true" and "false" values in the computer. The challenge is how to store information about imprecise properties. In this paper, we overview different ways to fully store the expert information about imprecise properties. We show that in the simplest case, when the only source of imprecision is disagreement between different experts, a natural way to store all the expert information is to use random sets; we also show how fuzzy sets naturally appear in such random-set representation. We then show how the random-set representation can be extended to the general ("fuzzy") case when, in addition to disagreements, experts are also unsure whether some objects satisfy certain properties or not.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-13-75, December 2013
Towards Designing Optimal Individualized Placement Tests
Octavio Lerma, Olga Kosheleva, Shahnaz Shahbazova, and Vladik Kreinovich

To find the current level of a student's knowledge, we use a sequence of problems of increasing complexity; if a student can solve a problem, the system generates a more complex one; if a student cannot solve a problem, the system generates an easier one. To find a proper testing scheme, we must take into account that every time a student cannot solve a problem, he/she gets discouraged. To take this into account, we define an overall effect on a student by combining "positive" and "negative" problems with different weights, and we design a testing scheme which minimizes this effect.

File in pdf


Technical Report UTEP-CS-13-74, December 2013
Updated version UTEP-CS-13-74a, March 2014
Feasible algorithms for lattice and directed subspaces
Jennifer Del Valle, Vladik Kreinovich, and Piotr J. Wojciechowski

Published in Mathematical Proceedings of the Royal Irish Academy, 2014, Vol. 112A, No. 2, pp. 199-204.

In some practical situations (e.g., in econometrics), it is important to check whether a given linear subspace of the space R^m with component-wise order is a lattice -- and if it is not, whether it is at least a directed ordered space. Because of the practical importance, it is desirable to have feasible algorithms for solving these problems -- which in Computer Science is usually interpreted as algorithms whose computation time does not exceed a polynomial of the length of the input. No such algorithms were previously known. In this paper, we present feasible algorithms for solving both problems.

Original file UTEP-CS-13-74 in pdf
Updated version UTEP-CS-13-74a in pdf


Technical Report UTEP-CS-13-73, December 2013
Dialect or a New Language: A Possible Explanation of the 70% Mutual Intelligibility Threshold
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2014, Vol. 9, No. 4, pp. 189-192.

In most cases, linguists have a consensus on when people from different regions speak two different dialects of the same language (and can, thus, understand each other reasonably well) or two different languages (in this case, their mutual intelligibility is limited). In most cases, this intuitive consensus corresponds to a 70% mutual intelligibility threshold: if at least 70% of the words from one region are understandable to people from another region, then these are two dialects, otherwise these are two different languages. In this paper, we provide a possible explanation for this 70% threshold.

File in pdf


Technical Report UTEP-CS-13-72, December 2013
Finding the Best Function: A Way to Explain Calculus of Variations to Engineering and Science Students
Olga Kosheleva and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2013, Vol. 7, No. 144, pp. 7187-7192.

In many practical problems, we need to find the most appropriate function: e.g., we need to find a control strategy u(t) that leads to the best performance of a system, we need to find the shape of the car which leads to the smallest energy losses, etc. Optimization over an unknown function can be described by the known Euler-Lagrange equations. The traditional way of deriving Euler-Lagrange equations when explaining them to engineering and science students is, however, somewhat over-complicated. We provide a new, simpler way of deriving these equations, a way in which we directly use the fact that when the optimum is attained, all partial derivatives are equal to 0.
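
For reference, here is a standard statement of the Euler-Lagrange equations mentioned above, for a functional of one unknown function u(t) (the notation is generic and not taken from the paper):

  \[
  J[u] = \int_{t_0}^{t_1} L\bigl(t, u(t), \dot u(t)\bigr)\, dt,
  \qquad
  \frac{\partial L}{\partial u} - \frac{d}{dt}\,\frac{\partial L}{\partial \dot u} = 0.
  \]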

File in pdf


Technical Report UTEP-CS-13-71, December 2013
Why 20? Why 40? A Possible Explanation of a Special Role of 20 and 40 in Traditional Number Systems
Olga Kosheleva and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2013, Vol. 7, No. 144, pp. 7179-7186.

Both historical and linguistic evidence shows that numbers 20 and 40 played a special role in many traditional numerical systems. The fact that, e.g., the same number 20 appears in unrelated cultures such as Romans and Mayans is an indication that this number must have a general explanation related to human experience. In this paper, we provide a possible explanation of 20 and 40 along these lines: namely, we show that these numbers can be identified as the smallest sample sizes for which we can extract statistically significant information.

File in pdf


Technical Report UTEP-CS-13-70, December 2013
Updated version UTEP-CS-13-70a, March 2014
Approximate Nature of Traditional Fuzzy Methodology Naturally Leads to Complex-Valued Fuzzy Degrees
Olga Kosheleva and Vladik Kreinovich

Published in Proceedings of the IEEE World Congress on Computational Intelligence WCCI'2014, Beijing, China, July 6-11, 2014.

In the traditional fuzzy logic, the experts' degrees of confidence in their statements are described by numbers from the interval [0,1]. These degrees have a clear intuitive meaning. Somewhat surprisingly, in some applications, it turns out to be useful to also consider different numerical degrees -- e.g., complex-valued degrees. While these complex-valued degrees are helpful in solving practical problems, their intuitive meaning is not clear. In this paper, we provide a possible explanation for the success of complex-valued degrees which makes their use more intuitively understandable -- namely, we show that these degrees naturally appear due to the approximate nature of the traditional fuzzy methodology.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-69, December 2013
Why Triangular Membership Functions Work Well in F-Transform: A Theoretical Explanation
Jaime Nava and Vladik Kreinovich

In many practical applications, it is useful to represent a signal or an image by its average values on several fuzzy sets. The corresponding F-transform technique has many useful applications in signal and image processing. In principle, we can use different membership functions. Somewhat surprisingly, in many applications, the best results occur when we use triangular membership functions. In this paper, we provide a possible theoretical explanation for this empirical phenomenon.
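
For context, the direct F-transform components referred to above are the weighted averages of the signal f with respect to the basic functions A_1, ..., A_n (a standard formula, stated here for reference rather than taken from this report):

  \[
  F_k = \frac{\int f(x)\, A_k(x)\, dx}{\int A_k(x)\, dx}, \qquad k = 1, \dots, n.
  \]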

File in pdf


Technical Report UTEP-CS-13-68, December 2013
Knowledge Geometry Is Similar to General Relativity: Both Mass and Knowledge Curve the Corresponding Spaces
Francisco Zapata and Vladik Kreinovich

Published in Mathematical Structures and Modelling, 2014, Vol. 29, pp. 31-37.

In this paper, we explain and explore the idea that knowledge is similar to mass in physics: similarly to how mass curves space-time, knowledge curves the corresponding knowledge space.

File in pdf


Technical Report UTEP-CS-13-67, November 2013
Finding Specifications of While Statements Using Patterns
Aditi Barua and Yoonsik Cheon

A formal correctness proof of code containing loops such as while statements typically uses the technique of proof-by-induction, and often the most difficult part of carrying out an inductive proof is formulating a correct induction hypothesis, a specification for a loop statement. An incorrect induction hypothesis will surely lead to a proof failure. In this paper we propose a systematic way for identifying specifications of while statements. The key idea of our approach is to categorize and document common patterns of while statements along with their specifications. This is based on our observation that similarly-structured while statements frequently have similarly-structured specifications. Thus, a catalog of code and specification patterns can be used as a good reference for finding and formulating a specification of a while statement. We explain our approach using functional program verification in which a program is viewed as a mathematical function from one program state to another, and a correctness proof is done by comparing two functions, the implemented and the specified. However, we believe our approach is applicable to other verification techniques such as Hoare logic using pre- and post-conditions.

File in pdf


Technical Report UTEP-CS-13-66, October 2013
Picture Fuzzy Sets - a new concept for computational intelligence problems
Bui Cong Cuong and Vladik Kreinovich

Published in Proceedings of the Third World Congress on Information and Communication Technologies WICT'2013, Hanoi, Vietnam, December 15-18, 2013, pp. 1-6.

Since Zadeh introduced fuzzy sets in 1965, a lot of new theories treating imprecision and uncertainty have been introduced. Some of these theories are extensions of fuzzy set theory, others try to handle imprecision and uncertainty in a different way. In this paper, we introduce a new notion of picture fuzzy sets (PFS), which are direct extensions of fuzzy sets and of intuitionistic fuzzy sets (Atanassov). Then some operations on picture fuzzy sets are defined and some properties of these operations are considered. Here the basic preliminaries of PFS theory are presented.
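
For context, a picture fuzzy set on a universe X assigns to each element degrees of positive, neutral, and negative membership; the defining constraint, as it is usually formulated (stated here in generic notation), is

  \[
  A = \{(x, \mu_A(x), \eta_A(x), \nu_A(x)) : x \in X\},
  \qquad
  \mu_A(x) + \eta_A(x) + \nu_A(x) \le 1,
  \]

where μ_A, η_A, and ν_A map X into [0,1], and the remainder 1 − (μ_A(x) + η_A(x) + ν_A(x)) can be interpreted as a degree of refusal.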

File in pdf


Technical Report UTEP-CS-13-65, October 2013
Studying Volatility and Dependency of Chinese Outbound Tourism Demand in Singapore, Malaysia, and Thailand: A Vine Copula Approach
Jianxu Liu, Songsak Sriboonchitta, Hung T. Nguyen and Vladik Kreinovich

Published in: Van-Nam Huynh, Vladik Kreinovich, and Songsak Sriboonchitta (eds.), Modeling Dependence in Econometrics, Springer Verlag, Berlin, Heidelberg, 2014, pp. 255-272.

This paper investigates the volatility and dependence of Chinese tourism demand for Singapore, Malaysia, and Thailand (SMT) destinations, using the vine-copula-based autoregressive moving average-generalized autoregressive conditional heteroskedasticity (ARMA-GARCH) model. It is found that a jolt to the tourist flow can have long-standing ramifications for the SMT countries. The estimation of the vine copulas among SMT shows that the Survival Gumbel, Frank, and Gaussian copulas are the best copulas for the Canonical vine (C-vine) or Drawable vine (D-vine) among the possible pair-copulas. In addition, this paper illustrates the making of time-varying Frank copulas for vine copulas. Finally, there is a discussion on tourism policy planning for better managing the tourism demand for the SMT countries. We suggest that tour operators and the national tourism promotion authorities of the SMT countries collaborate closely in the marketing and promotion of joint tourism products.

File in pdf


Technical Report UTEP-CS-13-64, October 2013
A Vine Copula Approach for Analyzing Financial Risk and Co-movement of the Indonesian, Philippine and Thailand Stock Markets
Songsak Sriboonchitta, Jianxu Liu, Vladik Kreinovich, and Hung T. Nguyen

Published in: Van-Nam Huynh, Vladik Kreinovich, and Songsak Sriboonchitta (eds.), Modeling Dependence in Econometrics, Springer Verlag, Berlin, Heidelberg, 2014, pp. 241-254.

This paper aims at analyzing the financial risk and co-movement of stock markets in three countries: Indonesia, the Philippines, and Thailand. It consists of analyzing the conditional volatility and testing the leverage effect in the stock markets of the three countries. To capture the pairwise and conditional dependence between the variables, we use the method of vine copulas. In addition, we illustrate the computations of the value at risk and the expected shortfall using Monte Carlo simulation with a copula-based GJR-GARCH model. The empirical evidence shows that all the leverage effects add much to the capacity for explanation of the three stock returns, and that the D-vine structure is more appropriate than the C-vine one for describing the dependence of the three stock markets. In addition, the value at risk and the expected shortfall provide evidence to confirm that the portfolio may avoid risk to a significant degree.

File in pdf


Technical Report UTEP-CS-13-63, October 2013
From Urysohn's Universal Metric Space to a Universal Space-Time
A. G. Aksoy, Z. Glassman, O. Kosheleva, and V. Kreinovich

Published in Mathematical Structures and Modeling, 2013, Vol. 28, No. 2, pp. 28-34.

A known result of Urysohn shows that there exists a universal metric space, i.e., a metric space into which every other (separable) metric space can be isomorphically embedded. Moreover, this universal metric space can be selected to be ultra-homogeneous -- every isomorphism between two of its finite subsets can be extended to an isomorphism of the whole space.

Starting with Einstein's theories of Special and General relativity, space-times are described by a different type of structure -- a set (of events) equipped with the proper time t(a,b) between points a and b; such spaces are known as space-times with kinematic metric, or k-space-times. In this paper, we show that Urysohn's result can be extended to k-space-times -- namely, that there exists an ultra-homogeneous universal k-space-time.

File in pdf


Technical Report UTEP-CS-13-62, October 2013
Algebraic Product is the Only t-Norm for Which Optimization Under Fuzzy Constraints is Scale-Invariant
Juan Carlos Figueroa Garcia, Martine Ceberio, and Vladik Kreinovich

Published in Proceedings of the Sixth International Workshop on Constraints Programming and Decision Making CoProd'2013, El Paso, Texas, November 1, 2013, pp. 8-11; detailed version published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making: Theory and Applications, Springer Verlag, Berlin, Heidelberg, 2018, pp. 51-54.

In many practical situations, we need to optimize under fuzzy constraints. There is a known Bellman-Zadeh approach for solving such problems, but the resulting solution, in general, depends on the choice of a not well-defined constant M. We show that this dependence disappears if we use the algebraic product t-norm ("and"-operation) a * b, and we also prove that the algebraic product is the only t-norm for which the corresponding solution is independent of M.
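
A small numerical sketch of this effect (assuming, for illustration, a Bellman-Zadeh-style formulation in which the objective f(x) is turned into a membership degree f(x)/M for some normalizing constant M; the data below are made up):

  # Optimization under a fuzzy constraint: maximize t(f(x)/M, mu_c(x)),
  # where mu_c is the membership function of the fuzzy constraint.

  xs = [0, 1, 2, 3, 4]
  f = {0: 1.0, 1: 3.0, 2: 5.0, 3: 7.0, 4: 9.0}      # objective values
  mu_c = {0: 1.0, 1: 0.9, 2: 0.6, 3: 0.3, 4: 0.1}   # fuzzy constraint

  def best(tnorm, M):
      """Maximizer of tnorm(f(x)/M, mu_c(x)) over the candidate values xs."""
      return max(xs, key=lambda x: tnorm(f[x] / M, mu_c[x]))

  product = lambda a, b: a * b

  for M in (10.0, 100.0):
      print(M, best(product, M), best(min, M))
  # With the algebraic product, the maximizer (x = 2) does not depend on M;
  # with the min t-norm, it moves (from x = 2 to x = 4) as M grows.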

File in pdf


Technical Report UTEP-CS-13-61, September 2013
Perception of Elite and Universal Systems of Higher Education: An Explanation of the Empirical Thresholds
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2013, Vol. 8, No. 36, pp. 1779-1783.

Systems of higher education are usually divided into elite, mass, and universal, depending on the proportion of young people who attend college. Human experts perceive a system as elite if less than 15% of young people in the 18-21 age group attend college, and as universal if more than 40% of young people of this age group attend college. The corresponding 15% and 40% thresholds are, however, purely empirical. In this paper, we provide an explanation for these empirical thresholds -- an explanation based on the known psychological 7 plus minus 2 law.

File in pdf


Technical Report UTEP-CS-13-60, September 2013
Why Rosenzweig-style midrashic approach makes rational sense: a logical (Spinoza-like) explanation of a seemingly non-logical approach
Olga Kosheleva and Vladik Kreinovich

Published in International Mathematical Forum, 2013, Vol. 8, No. 36, pp. 1773-1777.

The 20th-century German Jewish philosopher Franz Rosenzweig promoted a new approach to knowledge, an approach in which, in addition to logical reasoning, coming up with stories with imagined additional details is also important. This approach is known as midrashic, since it is similar to the use of such stories -- known as midrashes -- in Judaism. While stories can make the material interesting, traditionally, such stories are not viewed as a serious part of scientific discovery. In this paper, we show that this seemingly non-logical approach can actually be explained in logical terms and thus makes perfect rational sense.

File in pdf


Technical Report UTEP-CS-13-59, September 2013
Similarity Approach to Defining Basic Level of Concepts Explained from the Utility Viewpoint
Joe Lorkowski and Martin Trnecka

Published in Proceedings of the Sixth International Workshop on Constraints Programming and Decision Making CoProd'2013, El Paso, Texas, November 1, 2013, pp. 17-21.

In many practical situations, it is necessary to describe an image in words. From the purely logical viewpoint, to describe the same object, we can use concepts of different levels of abstraction: e.g., when the image includes a dog, we can say that it is a dog, or that it is a mammal, or that it is a German Shepherd. In such situations, humans usually select a concept which, to them, is the most natural; this concept is called the basic level concept. However, the notion of a basic level concept is difficult to describe in precise terms; as a result, computer systems for image analysis are not very good at selecting concepts of the basic level. At first glance, since the question is how to describe human decisions, we should use notions from a (well-developed) decision theory -- such as the notion of utility. However, in practice, a well-founded utility-based approach to selecting basic level concepts is not as efficient as a heuristic "similarity" approach. In this paper, we explain this seeming contradiction by showing that the similarity approach can actually be explained in utility terms -- if we use a more accurate description of the utility of different alternatives.

File in pdf


Technical Report UTEP-CS-13-58, September 2013
Data Collection for the Similar Segments in Social Speech Task
Nigel G. Ward and Steven D. Werner

Information retrieval systems rely heavily on models of similarity, but for spoken dialog such models currently use mostly standard textual-content similarity. As part of the MediaEval Benchmarking Initiative, we have created a new corpus to support development of similarity models for spoken dialog. This corpus includes 26 casual dialogs among members of two semi-cohesive groups, totaling about 5 hours, with 1889 labeled regions associated into 227 sets which annotators judged to be similar enough to share a tag. This technical report brings together information about this corpus and its intended uses.

File in pdf


Technical Report UTEP-CS-13-57, September 2013
Why in Mayan Mathematics, Zero and Infinity Are the Same: A Possible Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2013, Vol. 7, No. 124, pp. 6193-6197.

In Mayan mathematics, zero is supposed to be, in some sense, equal to infinity. At first glance, while this statement may have a deep philosophical meaning, it does not seem to make much mathematical sense. In this paper, we show that this statement may be made mathematically reasonable. Specifically, on a real line, it is often useful to consider both −∞ and +∞ as a single infinity. When we deal with very small and very large numbers, it makes sense to use floating point representation, i.e., in effect, consider logarithms of the original values. In terms of logarithms, the original value 0 corresponds to −∞, while the original infinite value corresponds to +∞. When we treat both possible values −∞ and +∞ as a single infinity, we thus treat the original values 0 and infinity as similar.

File in pdf


Technical Report UTEP-CS-13-56, August 2013
Updated version UTEP-CS-13-56a, August 2013
How to Distinguish True Dependence from Varying Independence?
Marketa Krmelova, Martin Trnecka, Vladik Kreinovich, and Berlin Wu

Published in Journal of Intelligent Technologies and Applied Statistics, 2013, Vol. 6, No. 4, pp. 339-351.

A usual statistical criterion for the quantities X and Y to be independent is that the corresponding distribution function F(x,y) is equal to the product of the corresponding marginal distribution functions. If this equality is violated, this is usually taken to mean that X and Y are dependent. In practice, however, the inequality may be caused by the fact that we have a mixture of several populations, in each of which X and Y are independent. In this paper, we show how we can distinguish true dependence from such varying independence. This can also lead to new measures of the degree of independence and of varying independence.
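
A small simulation illustrating how a mixture of populations can mimic dependence (a sketch; the distributions and numbers are chosen only for illustration):

  import random

  random.seed(0)

  def sample_mixture(n):
      """Mixture of two populations; within each, X and Y are independent."""
      data = []
      for _ in range(n):
          mx, my = (0.0, 0.0) if random.random() < 0.5 else (5.0, 5.0)
          data.append((random.gauss(mx, 1.0), random.gauss(my, 1.0)))
      return data

  def pearson(data):
      """Sample Pearson's correlation coefficient of a list of (x, y) pairs."""
      n = len(data)
      mx = sum(x for x, _ in data) / n
      my = sum(y for _, y in data) / n
      cov = sum((x - mx) * (y - my) for x, y in data)
      sx = sum((x - mx) ** 2 for x, _ in data) ** 0.5
      sy = sum((y - my) ** 2 for _, y in data) ** 0.5
      return cov / (sx * sy)

  # The pooled sample shows strong correlation (roughly 0.85 here), even though
  # X and Y are independent within each of the two populations.
  print(pearson(sample_mixture(10000)))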

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-55, August 2013
Complete Description of Idempotent Hedges in Fuzzy Logic
Jaime Nava

Published in Journal of Uncertain Systems, 2014, Vol. 8, No. 3, pp. 190-192.

In describing expert knowledge, it is often important to properly take into account hedges like "very", "somewhat", etc. In particular, fuzzy logic provides a consistent way of describing hedges. For some of the hedges, a repetition changes the meaning: e.g., "very very small" is smaller than "very small". However, other hedges -- like "somewhat" -- are idempotent, in the sense that repeating this hedge twice does not change the meaning. In this paper, we provide a complete description of such idempotent hedges.

File in pdf


Technical Report UTEP-CS-13-54, August 2013
Temporarily withdrawn


Technical Report UTEP-CS-13-53, August 2013
Conservation of Energy Implies Conservation of Momentum: How We Can Explain Conservation of Momentum to Before-Calculus Students
Eric Freudenthal, Eric Hagedorn, and Olga Kosheleva

Published in Journal of Uncertain Systems, 2014, Vol. 8, No. 3, pp. 169-172.

In solving physics problems, it is often important to use the laws of conservation of energy and momentum. While most people have an intuitive understanding of energy and of its conservation, there is usually no intuition behind momentum, and known textbook derivations of conservation of momentum use calculus -- which is usually taught after momentum. In this paper, we show how the law of conservation of momentum can be explained to before-calculus students: by using the fact that this law can be derived from the more intuitive conservation of energy if we consider energy in different coordinate systems.
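
One standard way to write out an argument of this kind (a sketch, for an elastic collision of point masses; this illustrates the general idea rather than reproducing the paper's exact derivation): if kinetic energy is conserved in the original frame,

  \[
  \sum_i \tfrac12 m_i u_i^2 = \sum_i \tfrac12 m_i (u_i')^2,
  \]

and we require that it also be conserved in a frame moving with velocity v,

  \[
  \sum_i \tfrac12 m_i (u_i - v)^2 = \sum_i \tfrac12 m_i (u_i' - v)^2,
  \]

then expanding the squares and subtracting the first equation (the ½ m_i v² terms cancel) leaves

  \[
  v \sum_i m_i u_i = v \sum_i m_i u_i',
  \]

i.e., conservation of momentum.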

File in pdf


Technical Report UTEP-CS-13-52, August 2013
On Early Stages of Idea Propagation, the Number of Adopters Grows as n(t) ~ c * t^a: Theoretical Explanation of the Empirical Observation
L. Octavio Lerma, Deana Pennington, and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2014, Vol. 8, No. 3, pp. 180-185.

New good ideas sometimes propagate too slowly. To speed up their propagation, we need to have a quantitative understanding of how ideas propagate. An intuitive understanding of ideas propagation has led to several reasonable first-approximation mathematical models. These models provide a good description of idea propagation on the later stages, when the ideas have already been adopted by a reasonably large number of people. However, at the critically important early stages, these models are not perfect: these models predict a linear growth with time, while empirical growth data is often better described by a power law. In this paper, we provide an intuitive theoretical explanation of the observed power-law growth.

File in pdf


Technical Report UTEP-CS-13-51, August 2013
Vine Copulas as a Way to Describe and Analyze Multi-Variate Dependence in Econometrics: Computational Motivation and Comparison with Bayesian Networks and Fuzzy Approaches
Songsak Sriboonchitta, Jianxu Liu, Vladik Kreinovich, and Hung T. Nguyen

Published in: Van-Nam Huynh, Vladik Kreinovich, and Songsak Sriboonchitta (eds.), Modeling Dependence in Econometrics, Springer Verlag, Berlin, Heidelberg, 2014, pp. 169-187.

In the last decade, vine copulas emerged as a new efficient technique for describing and analyzing multi-variate dependence in econometrics. Our experience has shown, however, that while these techniques have been successfully applied to many practical problems of econometrics, there is still a lot of confusion and misunderstanding related to vine copulas. In this paper, we provide a motivation for this new technique from the computational viewpoint. We show that other techniques used to describe dependence -- Bayesian networks and fuzzy techniques -- can be viewed as a particular case of vine copulas.

File in pdf


Technical Report UTEP-CS-13-50, August 2013
Why Trapezoidal and Triangular Membership Functions Work So Well: Towards a Theoretical Explanation
Aditi Barua, Lalitha Snigdha Mudunuri, and Olga Kosheleva

Published in Journal of Uncertain Systems, 2014, Vol. 8, pp. 164-168.

In fuzzy logic, an imprecise ("fuzzy") property is described by its membership function μ(x), i.e., by a function which describes, for each real number x, to what degree this real number satisfies the desired property. In principle, membership functions can be of different shape, but in practice, trapezoidal and triangular membership functions are most frequently used. In this paper, we provide an interval-based theoretical explanation for this empirical fact.
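
For concreteness, here is a minimal sketch of the two shapes discussed above (the parameter values are made up; this is not code from the paper):

  def triangular(x, a, b, c):
      """Triangular membership function: 0 outside (a, c), rising to 1 at b."""
      if x <= a or x >= c:
          return 0.0
      return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

  def trapezoidal(x, a, b, c, d):
      """Trapezoidal membership function: 0 outside (a, d), equal to 1 on [b, c]."""
      if x <= a or x >= d:
          return 0.0
      if x < b:
          return (x - a) / (b - a)
      return 1.0 if x <= c else (d - x) / (d - c)

  # Example: "approximately 5" and "roughly between 4 and 6"
  print(triangular(4.5, 3, 5, 7), trapezoidal(4.5, 3, 4, 6, 7))   # 0.75 1.0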

File in pdf


Technical Report UTEP-CS-13-49, August 2013
How to Faster Test a Device for Different Combinations of Parameters
Francisco Zapata, Luis Gutierrez, and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2014, Vol. 8, No. 3, pp. 233-238.

A device has to function properly under all possible conditions: e.g., for all temperatures within a given range, for all possible humidity values within a given range, etc. Ideally, it would be nice to be able to test a device for all possible combinations of these parameters, but the number of such combinations is often so huge that such an exhaustive testing is not possible. Instead, it is reasonable to check the device for all possible values of each parameter, for each possible pair of values of two parameters, and, in general, for all possible combinations of values of k parameters for some k. For n parameters, a straightforward testing design with this property contains O(n^k) * N^k experiments, where N is the number of tested values of each parameter. We show that, by using a more sophisticated testing design, we can decrease the number of experiments to a much smaller number (log(n))^(k-1) * N^k.
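
A quick numerical comparison of the two experiment counts quoted above, simply plugging illustrative values into the formulas (constants hidden in the O-notation are ignored):

  from math import log

  def straightforward(n, N, k):
      """O(n^k) * N^k experiments (constant factor omitted)."""
      return (n ** k) * (N ** k)

  def sophisticated(n, N, k):
      """(log(n))^(k-1) * N^k experiments."""
      return (log(n) ** (k - 1)) * (N ** k)

  n, N, k = 100, 5, 2   # 100 parameters, 5 tested values each, pairwise coverage
  print(straightforward(n, N, k))        # 250000
  print(round(sophisticated(n, N, k)))   # about 115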

File in pdf


Technical Report UTEP-CS-13-48, August 2013
Note on Fair Price under Interval Uncertainty
Joshua McKee, Joe Lorkowski, and Thavatchai Ngamsantivong

Published in Journal of Uncertain Systems, 2014, Vol. 8, No. 3, pp. 186-189.

Often, in decision making situations, we do not know the exact value of the gain resulting from each decision; we only know bounds on this gain. To make a reasonable decision under such interval uncertainty, it makes sense to estimate the fair price of each alternative, and then to select the alternative with the highest price. In this paper, we show that the value of the fair price can be uniquely determined from some reasonable requirements: e.g., the additivity requirement, that the fair price of two objects together should be equal to the sum of the fair prices of both objects.

File in pdf


Technical Report UTEP-CS-13-47, July 2013
Minimization of Average Sensitivity as a Method of Selecting Fuzzy Functions and Operations: Successes and Limitations
Riya George, Suresh Subramanian, Alejandro Vega, and Olga Kosheleva

Published in Journal of Uncertain Systems, 2014, Vol. 8, No. 3, pp. 173-179.

Fuzzy logic is an extension of the standard 2-valued logic -- with two possible truth values 0 ("false") and 1 ("true") -- to values (degrees of certainty) represented by arbitrary numbers from the interval [0,1]. One of the main challenges in fuzzy logic is that we need to extend the usual logical operations from the set {0,1} to the entire interval, and there are many possible extensions. One promising technique for selecting a reasonable extension is to take into account that the fuzzy degrees of certainty are themselves only known with uncertainty; so, it makes sense to select an operation which is, on average, the least sensitive to the corresponding uncertainty. This technique has successfully worked in selecting unary and binary operations and in selecting membership functions. In this paper, we show, however, that this minimization technique does not work well for selecting ternary operations, and that in the discrete case, the results of applying this technique are somewhat counterintuitive.

File in pdf


Technical Report UTEP-CS-13-46, July 2013
Towards a Localized Version of Pearson's Correlation Coefficient
Vladik Kreinovich, Hung T. Nguyen, and Berlin Wu

Published in Journal of Intelligent Technologies and Applied Statistics, 2013, Vol. 6, No. 3, pp. 215-224.

Pearson's correlation coefficient is used to describe dependence between random variables X and Y. In some practical situations, however, we have strong correlation for some values of X and/or Y and no correlation for other values of X and Y. To describe such a local dependence, we come up with a natural localized version of Pearson's correlation coefficient. We also study the properties of the newly defined localized coefficient.
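
For reference, the (global) Pearson's correlation coefficient that serves as the starting point can be computed as follows (a standard sample formula; the localized version proposed in the report is not reproduced here):

  def pearson(xs, ys):
      """Sample Pearson's correlation coefficient of two equal-length lists."""
      n = len(xs)
      mx, my = sum(xs) / n, sum(ys) / n
      cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
      sx = sum((x - mx) ** 2 for x in xs) ** 0.5
      sy = sum((y - my) ** 2 for y in ys) ** 0.5
      return cov / (sx * sy)

  print(pearson([1, 2, 3, 4], [2, 4, 5, 9]))   # about 0.96: strong linear dependence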

File in pdf


Technical Report UTEP-CS-13-45, July 2013
Images are Easier to Restore than 1-D Signals: A Theoretical Explanation of a Surprising Empirical Phenomenon
Christian Servin and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2014, Vol. 8, No. 3, pp. 211-215.

Similar techniques are often used to restore 1-D signals and 2-D images from distorted ("blurred") observations. From the purely mathematical viewpoint, 1-D signals are simpler, so it should be easier to restore signals than images. However, in practice, it is often easier to restore a 2-D image than to restore a 1-D signal. In this paper, we provide a theoretical explanation for this surprising empirical phenomenon.

File in pdf


Technical Report UTEP-CS-13-44, July 2013
√(x^2 + μ) is the Most Computationally Efficient Smooth Approximation to |x|: a Proof
Carlos Ramirez, Reinaldo Sanchez, Vladik Kreinovich, and Miguel Argaez

Published in Journal of Uncertain Systems, 2014, Vol. 8, No. 3, pp. 205-210.

In many practical situations, we need to minimize an expression of the type |c1| + ... + |cn|. The problem is that most efficient optimization techniques use the derivative of the objective function, but the function |x| is not differentiable at 0. To make optimization efficient, it is therefore reasonable to approximate |x| by a smooth function. We show that in some reasonable sense, the most computationally efficient smooth approximation to |x| is the function √(x^2 + μ), a function which has indeed been successfully used in such optimization.
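
A small sketch of the approximation and of its everywhere-defined derivative (illustrative values of μ; this is not the optimization code from the paper):

  def smooth_abs(x, mu):
      """Smooth approximation sqrt(x^2 + mu) to |x|, for mu > 0."""
      return (x * x + mu) ** 0.5

  def smooth_abs_derivative(x, mu):
      """Derivative x / sqrt(x^2 + mu); well-defined even at x = 0."""
      return x / (x * x + mu) ** 0.5

  for mu in (1e-2, 1e-4, 1e-6):
      # The worst-case approximation error, attained at x = 0, is sqrt(mu).
      print(mu, smooth_abs(0.0, mu), smooth_abs_derivative(0.0, mu))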

File in pdf


Technical Report UTEP-CS-13-43, July 2013
Updated version UTEP-CS-13-43a, August 2013
Computing Covariance and Correlation in Optimally Privacy-Protected Statistical Databases: Feasible Algorithms
Joshua Day, Ali Jalal-Kamali, and Vladik Kreinovich

Published in Mo Jamshidi, Vladik Kreinovich, and Janusz Kacprzyk (eds.), Proceedings of 3rd World Conference on Soft Computing, San Antonio, December 15-18, 2013, pp. 373-382.

In many real-life situations, e.g., in medicine, it is necessary to process data while preserving the patients' confidentiality. One of the most efficient methods of preserving privacy is to replace the exact values with intervals that contain these values. For example, instead of an exact age, a privacy-protected database only contains the information that the age is, e.g., between 10 and 20, or between 20 and 30, etc. Based on this data, it is important to compute correlation and covariance between different quantities. For privacy-protected data, different values from the intervals lead, in general, to different estimates for the desired statistical characteristic. Our objective is then to compute the range of possible values of these estimates.

Algorithms for effectively computing such ranges have been developed for situations when intervals come from the original surveys, e.g., when a person fills in whether his or her age is between 10 and 20, between 20 and 30, etc. These intervals, however, do not always lead to an optimal privacy protection; it turns out that more complex, computer-generated "intervalization" can lead to better privacy under the same accuracy, or, alternatively, to more accurate estimates of statistical characteristics under the same privacy constraints. In this paper, we extend the existing efficient algorithms for computing covariance and correlation based on privacy-protected data to this more general case of interval data.
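
As a simple baseline for this kind of range computation, here is the (easy) case of the sample mean over interval data; the covariance and correlation ranges treated in the paper are harder to compute, and the paper's algorithms are not reproduced here:

  def mean_range(intervals):
      """Range of possible sample means when each value is only known to lie in [lo, hi].

      The sample mean is monotone in each value, so its smallest (largest) possible
      value is attained when every value is at its lower (upper) endpoint.
      """
      n = len(intervals)
      return (sum(lo for lo, _ in intervals) / n,
              sum(hi for _, hi in intervals) / n)

  ages = [(10, 20), (20, 30), (20, 30), (30, 40)]   # privacy-protected ages
  print(mean_range(ages))   # (20.0, 30.0)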

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-42, July 2013
Is Lagrangian Formalism Adequately Describing Energy Conservation?
Vladik Kreinovich and Olga Kosheleva

Published in Mathematical Structures and Modeling, 2013, Vol. 28, No. 2, pp. 21-27.

In most physical theories, total energy is conserved. For example, when the kinetic energy of a particle decreases, the potential energy increases accordingly. For some physical systems, energy is not conserved. For example, if we consider a particle moving with friction, the energy of the particle itself is not conserved: it is transformed into thermal energy of the surrounding medium. For simple systems, energy is easy to define. For more complex physical systems, such a definition is not easy. To describe the energy of generic systems, physicists came up with a general notion of energy based on the Lagrangian formalism -- a minimal-action representation of physical theories which is now ubiquitous. For many physical theories, this notion leads to physically meaningful definitions of energy. In this paper, we show that there are also examples when the Lagrangian-motivated notion of energy is not physically meaningful at all -- e.g., according to this definition, all dynamical systems are energy-conserving.
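
For reference, the Lagrangian-based notion of energy referred to above is the standard one: for a Lagrangian L(q, q̇, t),

  \[
  E = \sum_i \dot q_i\, \frac{\partial L}{\partial \dot q_i} - L,
  \]

and this quantity is conserved along solutions of the Euler-Lagrange equations whenever L does not explicitly depend on time.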

File in pdf


Technical Report UTEP-CS-13-41, July 2013
Lexical and Prosodic Indicators of Importance in Spoken Dialog
Nigel G. Ward and Karen A. Richart-Ruiz

This technical report complements the paper, Patterns of Importance Variation in Spoken Dialog (Ward and Richart-Ruiz, 2013), providing additional evidence for the claims, additional findings, and more analysis. In particular, we report more on inter-annotator disagreement, on words that correlate with importance, on prosodic features and patterns that correlate with importance, and on how our predictive model of importance might be improved.

File in pdf


Technical Report UTEP-CS-13-40, July 2013
How to Gauge Accuracy of Measurements and of Expert Estimates: Beyond Normal Distributions
Christian Servin, Aline Jaimes, Craig Tweedie, Aaron Velasco, Omar Ochoa, and Vladik Kreinovich

Published in Mo Jamshidi, Vladik Kreinovich, and Janusz Kacprzyk (eds.), Proceedings of 3rd World Conference on Soft Computing, San Antonio, December 15-18, 2013, pp. 339--346.

To properly process data, we need to know the accuracy of different data points, i.e., accuracy of different measurement results and expert estimates. Often, this accuracy is not given. For such situations, we describe how this accuracy can be estimated based on the available data.

File in pdf


Technical Report UTEP-CS-13-39, July 2013
Stochastic Causality Is Inconsistent with the Lorentz Group
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2013, Vol. 28, No. 2, pp. 15-20.

According to modern physics, all physical processes are described by quantum theory. In particular, due to quantum fluctuations, even in empty space, the causal relation is, in general, slightly different from the usual Minkowski one. Since quantum effects are probabilistic, to properly represent the corresponding stochastic causality, we need to describe, for every two events e and e', the probability p(e,e') that e can causally influence e'. Surprisingly, it turns out that such a probability function cannot be Lorentz-invariant. In other words, once we take into account quantum effects in causality, Lorentz-invariance is violated -- similarly to the fact that it is violated if we take into account the presence of matter and start considering cosmological solutions.

File in pdf


Technical Report UTEP-CS-13-38, July 2013
Updated version UTEP-CS-13-38a, August 2013
Fuzzy Sets Can Be Interpreted as Limits of Crisp Sets, and This Can Help to Fuzzify Crisp Notions
Olga Kosheleva, Vladik Kreinovich, and Thavatchai Ngamsantivong

Published in Mo Jamshidi, Vladik Kreinovich, and Janusz Kacprzyk (eds.), Proceedings of 3rd World Conference on Soft Computing, San Antonio, December 15-18, 2013, pp. 327-337.

Fuzzy sets have been originally introduced as generalizations of crisp sets, and this is how they are usually considered. From the mathematical viewpoint, the problem with this approach is that most notions allow many different generalizations, so every time we try to generalize some notions to fuzzy sets, we have numerous alternatives. In this paper, we show that fuzzy sets can be alternatively viewed as limits of crisp sets. As a result, for some notions, we can come up with a unique generalization -- as the limit of the results of applying this notion to the corresponding crisp sets.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-37, July 2013
Updated version UTEP-CS-13-37a, June 2014
Computing the United Solution to an Interval Linear System Is NP-Hard -- Even When All Coefficients Are Known With the Same Accuracy
Ralph Kelsey and Vladik Kreinovich

When the coefficients of a linear system are known with interval uncertainty, instead of a single solution, we have the whole set of possible solutions -- known as the united set. It is known that in general, computing this united set is NP-hard. There exist several proofs of this NP-hardness; all known proofs use examples with intervals of different width -- corresponding to different accuracy in measuring different coefficients. We show that the problem remains NP-hard even if we limit ourselves to situations when all the coefficients are known with the same accuracy.
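
For reference, the united solution set of an interval linear system, for an interval matrix [A] and an interval right-hand side [b], is the standard notion

  \[
  \Sigma([A],[b]) = \{\,x \in \mathbb{R}^n : \exists A \in [A]\ \exists b \in [b]\ \ Ax = b\,\}.
  \]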

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-13-36, June 2013
Revised version UTEP-CS-13-36a, July 2013
How to Detect Linear Dependence on the Copula Level?
Vladik Kreinovich, Hung T. Nguyen, and Songsak Sriboonchitta

Published in: Van-Nam Huynh, Vladik Kreinovich, and Songsak Sriboonchitta (eds.), Modeling Dependence in Econometrics, Springer Verlag, Berlin, Heidelberg, 2014, pp. 65-82.

In many practical situations, the dependence between the quantities is linear or approximately linear. Knowing that the dependence is linear simplifies computations; so, it is desirable to detect linear dependencies. If we know the joint probability distribution, we can detect linear dependence by computing Pearson's correlation coefficient. In practice, we often have a copula instead of a full distribution; in this case, we face a problem of detecting linear dependence based on the copula. Also, distributions are often heavy-tailed, with infinite variances, in which case Pearson's formulas cannot be applied. In this paper, we show how to modify Pearson's formula so that it can be applied to copulas and to heavy-tailed distributions.

Original file in pdf
Revised version in pdf


Technical Report UTEP-CS-13-35, June 2013
Beyond Traditional Chemical Kinetics Formulas: Group-Theoretic Approach
Vladik Kreinovich

According to the traditional formulas of chemical kinetics, the reaction rate is proportional to the product of the concentrations of the reagents. This formula leads to a reasonable description of interactions both in chemistry and in other disciplines (e.g., in ecology). However, in many cases, these formulas are only approximate. Several semi-empirical formulas have been designed to more accurately describe the interaction rate. The problem is that most of these formulas are purely empirical: they lack a convincing theoretical explanation. In this paper, we show that a group-theoretic approach -- taking into account natural symmetries of the systems -- leads to the desired theoretical explanation for these empirical formulas.
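
For reference, the traditional rate law referred to above, for a reaction A + B → products, and one commonly used semi-empirical generalization with fitted exponents, are

  \[
  \text{rate} = k\,[A]\,[B]
  \qquad\text{and}\qquad
  \text{rate} = k\,[A]^{\alpha}\,[B]^{\beta},
  \]

where [A] and [B] denote concentrations and k, α, β are empirically determined constants.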

File in pdf


Technical Report UTEP-CS-13-34, June 2013
Space-Time Assumptions Behind NP-Hardness of Propositional Satisfiability
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modelling, 2014, Vol. 29, pp. 13-30.

For some problems, we know feasible algorithms for solving them. Other computational problems (such as propositional satisfiability) are known to be NP-hard, which means that, unless P=NP (which most computer scientists believe to be impossible), no feasible algorithm is possible for solving all possible instances of the corresponding problem. Most usual proofs of NP-hardness, however, use the Turing machine -- a very simplified version of a computer -- as a computation model. While the Turing machine has been convincingly shown to be adequate for describing what can be computed in principle, it is much less intuitive that these oversimplified machines are adequate for describing what can be computed effectively; while the corresponding adequacy results are known, they are not easy to prove and are, thus, not usually included in the textbooks. To make the NP-hardness result more intuitive and more convincing, we provide a new proof in which, instead of a Turing machine, we use a generic computational device. This proof explicitly shows the assumptions about space-time physics that underlie NP-hardness: that all velocities are bounded by the speed of light, and that the volume of a sphere grows no more than polynomially with radius. If one of these assumptions is violated, the proof no longer applies; moreover, in such space-times we can potentially solve the satisfiability problem in polynomial time.

File in pdf


Technical Report UTEP-CS-13-33, June 2013
Enhancing the Expressiveness of the CleanJava Language
Melisa Vela and Yoonsik Cheon

The CleanJava language is a formal annotation language for Java to support Cleanroom-style functional program verification that views a program as a mathematical function from one program state to another. The CleanJava notation is based on the Java expression syntax with a few extensions, and thus its vocabulary is somewhat limited to that of Java. This often makes it difficult to specify the rich semantics of a Java program in a succinct and natural way that is easy to manipulate for formal correctness reasoning. In this paper we propose to make the CleanJava language more expressive by supporting user-defined mathematical functions that are introduced solely for the purpose of writing annotations. A user-defined function is written in a notation similar to those of modern functional programming languages like SML and Haskell and has properties such as polymorphism and type inference. We also explain how the notion of functions fits in the object-oriented world of Java with concepts like inheritance and method overriding. User-defined functions not only enrich the vocabulary of CleanJava but also allow one to tune the abstraction level of annotations. One contribution of our work is bringing the notion of functions as found in modern functional programming languages to an object-oriented programming language in the context of writing annotations, thus blending the benefits of two programming paradigms.

File in pdf


Technical Report UTEP-CS-13-32, May 2013
Updated version UTEP-CS-13-32a, May 2013
Processing Quantities with Heavy-Tailed Distribution of Measurement Uncertainty: How to Estimate the Tails of the Results of Data Processing
Michal Holcapek and Vladik Kreinovich

Published in: Mo Jamshidi, Vladik Kreinovich, and Janusz Kacprzyk (eds.), Proceedings of 3rd World Conference on Soft Computing, San Antonio, December 15-18, 2013, pp. 25-32.

Measurements are never absolutely accurate; so, it is important to estimate how the measurement uncertainty affects the result of data processing. Traditionally, this problem is solved under the assumption that the probability distributions of measurement errors are normal -- or at least are concentrated, with high certainty, on a reasonably small interval. In practice, the distribution of measurement errors is sometimes heavy-tailed, when very large values have a reasonable probability. In this paper, we analyze the corresponding problem of estimating the tail of the result of data processing in such situations.

Original file in pdf
Updated file in pdf


Technical Report UTEP-CS-13-31, May 2013
Updated version UTEP-CS-13-31a, July 2013
Computing with Words: Towards a New Tuple-Based Formalization
Olga Kosheleva, Vladik Kreinovich, Ariel Garcia, Felipe Jovel, Luis Torres Escobedo, and Thavatchai Ngamsantivong

Published in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics IEEE SMC'2013, Manchester, UK, October 13-16, 2013, pp. 344-349.

An expert describes his or her opinion about a quantity by using imprecise ("fuzzy") words from a natural language, such as "small", "medium", "large", etc. Each of these words provides a rather crude description of the corresponding quantity. A natural way to refine this description is to assign degrees to which the observed quantity fits each of the selected words. For example, an expert can say that the value is reasonably small, but to some extent it is medium. In this refined description, we represent each quantity by a tuple of the corresponding degrees.

Once we have such tuple-based information about several quantities x1, ..., xm, and we know that another quantity y is related to them by a known relation y=f(x1, ..., xm), it is desirable to come up with a resulting tuple-based description of y. In this paper, we describe why a seemingly natural idea for computing such a tuple does not work, and we show how to modify this idea so that it can be used.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-13-30, May 2013
Updated version UTEP-CS-13-30a, July 2013
A Symmetry-Based Approach to Selecting Membership Functions and Its Relation to Chemical Kinetics
Vladik Kreinovich, Olga Kosheleva, Jorge Y. Cabrera, Mario Gutierrez, and Thavatchai Ngamsantivong

Published in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics IEEE SMC'2013, Manchester, UK, October 13-16, 2013, pp. 339-343.

In many practical situations, we encounter physical quantities like time for which there is no fixed starting point for measurements: physical properties do not change if we simply change (shift) the starting point. To describe knowledge about such properties, it is desirable to select membership functions which are similarly shift-invariant. We show that while we cannot require that each membership function is shift-invariant, we can require that the linear space of all linear combinations of given membership functions is shift-invariant. We describe all such shift-invariant families of membership functions, and we show that they are naturally related to the corresponding formulas of chemical kinetics.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-13-29, May 2013
CJC: An Extensible Checker for the CleanJava Annotation Language
Cesar Yeep and Yoonsik Cheon

CleanJava is a formal annotation language for the Java programming language to support a Cleanroom-style functional program verification technique that views programs as mathematical functions. It needs a suite of support tools, including a checker that can parse annotations and check them for syntactic and static semantic correctness. The two key requirements of the checker are flexibility and extensibility. Since the language is still under development and refinement, it should be flexible to facilitate language experimentation and accommodate language changes. It should also be extensible to provide base code for developing more advanced support tools like an automated theorem prover. In addition, it should recognize Java syntax, as CleanJava is a superset of Java. In this paper we describe our experience of developing a CleanJava checker called CJC and explain how we met the above requirements by using an open-source Java compiler. We expect our techniques and the lessons that we learned to be useful to others implementing Java-like languages.

File in pdf


Technical Report UTEP-CS-13-28, May 2013
Updated version UTEP-CS-13-28a, December 2013
Second updated version UTEP-CS-13-28b, February 2014
Using Symmetries (Beyond Geometric Symmetries) in Chemical Computations: Computing Parameters of Multiple Binding Sites
Andres Ortiz and Vladik Kreinovich

Published in Symmetry, 2014, Vol. 6, pp. 90-102.

We show how transformation group ideas can be naturally used to generate efficient algorithms for scientific computations. The general approach is illustrated on the example of determining, from the experimental data, the dissociation constants related to multiple binding sites. We also explain how the general transformation group approach is related to the standard (backpropagation) neural networks; this relation justifies the potential universal applicability of the group-related approach.

Original file in pdf
updated version in pdf
second updated version in pdf


Technical Report UTEP-CS-13-27, May 2013
Updated version UTEP-CS-13-27a, September 2013
Peak-End Rule: A Utility-Based Explanation
Olga Kosheleva, Martine Ceberio, and Vladik Kreinovich

Published in Proceedings of the Sixth International Workshop on Constraints Programming and Decision Making CoProd'2013, El Paso, Texas, November 1, 2013, pp. 12-16; detailed version published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making: Theory and Applications, Springer Verlag, Berlin, Heidelberg, 2018, pp. 101-106.

In many practical situations, people judge their overall experience by only taking into account the peak and the last levels of pleasantness or unpleasantness. While this peak-end rule is empirically supported by numerous psychological experiments, it seems to contradict our general theoretical ideas about people's preferences. In this paper, we show that, contrary to this impression, the peak-end rule can be justified based on the main ideas of the traditional utility-based decision theory.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-13-26, May 2013
Necessary and Sufficient Conditions for Generalized Uniform Fuzzy Partitions
Michal Holcapek, Irina Perfilieva, Vilem Novak, and Vladik Kreinovich

Published in Fuzzy Sets and Systems, 2015, Vol. 277, pp. 97-121.

The fundamental concept in the theory of the fuzzy transform (F-transform) is that of a fuzzy partition. The original definition assumes that each two fuzzy subsets overlap in such a way that the sum of membership degrees at each point is equal to 1. This condition can be generalized to obtain a denser fuzzy partition, which leads to improvement of the approximation properties of the F-transform. A problem then arises: how can one effectively construct such fuzzy partitions? We use a generating function with special properties, and it is not immediately clear whether it really defines a generalized uniform fuzzy partition. In this paper, we provide a necessary and sufficient condition under which this question can be answered, so that optimal generalized uniform fuzzy partitions can be designed more easily. This is important in various practical applications of the F-transform, for example in image processing, time series analysis, solving differential equations with boundary conditions, and others.
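
For reference, the original condition mentioned above (often called the Ruspini condition) requires the basic functions A_1, ..., A_n of the fuzzy partition to satisfy

  \[
  \sum_{k=1}^{n} A_k(x) = 1 \quad\text{for every } x \text{ in the domain.}
  \]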

File in pdf


Technical Report UTEP-CS-13-25, May 2013
Updated version UTEP-CS-13-25a, September 2013
Towards a Physically Meaningful Definition of Computable Discontinuous and Multi-Valued Functions (Constraints)
Martine Ceberio, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the Sixth International Workshop on Constraints Programming and Decision Making CoProd'2013, El Paso, Texas, November 1, 2013, pp. 22-26; detailed version published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making: Theory and Applications, Springer Verlag, Berlin, Heidelberg, 2018, pp. 45-50.

In computable mathematics, there are known definitions of computable numbers, computable metric spaces, computable compact sets, and computable functions. A traditional definition of a computable function, however, covers only continuous functions. In many applications (e.g., in phase transitions), physical phenomena are described by discontinuous or multi-valued functions (a.k.a. constraints). In this paper, we provide a physics-motivated definition of computable discontinuous and multi-valued functions, and we analyze properties of this definition.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-13-24, April 2013
An Ontological Approach to Capture Data Provenance Across Multiple Platforms
Leonardo Salayandia

The process of collecting and transforming data can extend across different platforms, both physical and digital. Capturing provenance that reflects the actions involved in such a process in a consistent manner can be difficult and involve the use of multiple tools. An approach based on formal ontologies and software engineering practices is presented to capture data provenance. The approach starts by creating ontologies about data collection and transformation processes. These ontologies, referred to as Workflow-Driven Ontologies, establish a consistent view of the process that is independent of the platform used to carry out the process. Next, software modules are generated, targeting specific types of platforms on which data processes are implemented, so that data provenance can be captured as the process is being carried out. This paper presents the software architecture of the approach and discusses the generation of software modules, leveraging the structure and terminology of Workflow Driven Ontologies to capture data provenance. The result of this approach is the creation and population of knowledge bases that capture the processes used to collect and transform data, as well as provenance about how individual datasets were produced.

File in pdf


Technical Report UTEP-CS-13-23, April 2013
Updated version UTEP-CS-13-23a, July 2013
Towards Discrete Interval, Set, and Fuzzy Computations
Enrique Portillo, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics IEEE SMC'2013, Manchester, UK, October 13-16, 2013, pp. 322-327.

In many applications, we know the function f(x1,...,xn), we know the intervals [xi] of possible values of each quantity xi, and we are interested in the range of possible values of y=f(x1,...,xn); this problem is known as the problem of interval computations. In other applications, we know the function f(x1,...,xn), we know the fuzzy sets Xi that describe what we know about each quantity xi, and we are interested in finding the fuzzy set Y corresponding to the quantity y=f(x1,...,xn); this problem is known as the problem of fuzzy computations. There are many efficient algorithms for solving these problems; however, most of these algorithms implicitly assume that each quantity xi can take any real value within its range. In practice, some quantities are discrete: e.g., xi can describe the number of people. In this paper, we provide feasible algorithms for interval, set, and fuzzy computations for such discrete inputs.
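
For intuition, here is a naive brute-force sketch of what the discrete problem asks for (exponential in the number of inputs, unlike the feasible algorithms proposed in the paper; the function and bounds are illustrative):

```python
from itertools import product

def discrete_range(f, discrete_domains):
    """Exact range of f when each input ranges over a finite set of
    integers inside its interval (naive enumeration; exponential in n)."""
    values = [f(*point) for point in product(*discrete_domains)]
    return min(values), max(values)

# Example: numbers of people in two groups; f is a difference of squares.
x1 = range(3, 8)      # x1 is an integer between 3 and 7
x2 = range(0, 5)      # x2 is an integer between 0 and 4
print(discrete_range(lambda a, b: a * a - b * b, [x1, x2]))   # (-7, 49)
```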

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-22, March 2013
Towards Fuzzy Method for Estimating Prediction Accuracy for Discrete Inputs, with Application to Predicting At-Risk Students
Xiaojing Wang, Martine Ceberio, and Angel F. Garcia Contreras

Published in Proceedings of the joint World Congress of the International Fuzzy Systems Association and Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2013, Edmonton, Canada, June 24-28, 2013, pp. 536-539.

In many practical situations, we need, given the values of the observed quantities x1, ..., xn, to predict the value of a desired quantity y. To estimate the accuracy of a prediction algorithm f(x1, ..., xn), we need to compare the results of this algorithm's prediction with the actually observed values.

The value y usually depends not only on the values x1, ..., xn, but also on the values of other quantities which we do not measure. As a result, even when we have the exact same values of the quantities x1, ..., xn, we may get somewhat different values of y. It is often reasonable to assume that, for each combination of values xi, the possible values of y are normally distributed, with some mean E and standard deviation s. Ideally, we should predict both E and s, but in many practical situations, we only predict a single value Y. How can we gauge the accuracy of this prediction based on the observations?

A seemingly reasonable idea is to use crisp evaluation of prediction accuracy: a method is accurate if Y belongs to a k-sigma interval [E - k * s, E + k * s], for some pre-selected value k (e.g., 2, 3, or 6). However, in this method, the value Y = E + k * s is considered accurate, but a value E + (k + d) * s (which, for small d > 0, is practically indistinguishable from Y) is not accurate. To achieve a more adequate description of accuracy, we propose to define a degree to which the given estimate is accurate.

As a case study, we consider predicting at-risk students.
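
As an illustration of the proposed idea, the sketch below uses one simple (and purely illustrative -- not necessarily the paper's) membership function: the degree is 1 while Y lies within k sigma of E and decreases linearly to 0 at 2k sigma.

```python
def accuracy_degree(Y, E, s, k=2.0):
    """Degree to which the prediction Y is accurate, given the observed
    mean E and standard deviation s.  The crisp k-sigma test returns 0 or 1;
    here the degree decreases linearly from 1 (at k sigma) to 0 (at 2k sigma).
    This is one illustrative choice of membership function."""
    z = abs(Y - E) / s
    if z <= k:
        return 1.0
    if z >= 2 * k:
        return 0.0
    return (2 * k - z) / k

print(accuracy_degree(Y=105.0, E=100.0, s=2.5, k=2.0))   # z = 2.00 -> degree 1.0
print(accuracy_degree(Y=107.6, E=100.0, s=2.5, k=2.0))   # z = 3.04 -> degree 0.48
```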

File in pdf


Technical Report UTEP-CS-13-21, March 2013
Updated version UTEP-CS-13-21a, July 2013
Final version UTEP-CS-13-21d, October 2013
How to Explain (and Overcome) 2% Barrier in Teaching Computer Science: Towards New Applications of Fuzzy Ideas
Olga Kosheleva and Vladik Kreinovich

Published in Archives for the Philosophy and History of Soft Computing, 2013, Vol. 1, No. 1.

Computer science educators have observed that, with the present way of teaching computing, only 2% of students can easily handle computational concepts -- and, as a result, only 2% of the students specialize in computer science. With the increasing role of computers in the modern world, and the increasing need for computer-related jobs, this 2% barrier creates a shortage of computer scientists. We notice that the current way of teaching computer science is based on the ease of using two-valued logic, i.e., on the ease of dividing all situations, with respect to each property, into three classes: yes, no, and unknown. The fact that the number of people for whom such a division is natural is approximately 2% provides a possible explanation of the 2% barrier -- and a natural idea of how to try to overcome this barrier: to tailor our teaching to students for whom a division into more than three classes is much more natural. This means, in particular, emphasizing fuzzy logic, in which, for each property, we divide the objects into several classes corresponding to different degrees to which the given property is satisfied. We also analyze the best ways to implement the corresponding fuzzy ideas.

Original file in pdf
First updated version in pdf
Final version in pdf


Technical Report UTEP-CS-13-20, March 2013
For Describing Uncertainty, Ellipsoids Are Better than Generic Polyhedra and Probably Better than Boxes: A Remark
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2013, Vol. 27, pp. 38-41.

For a single quantity, the set of all possible values is usually an interval. An interval is easy to represent in a computer: e.g., we can store its two endpoints. For several quantities, the set of possible values may have an arbitrary shape. An exact description of this shape requires infinitely many parameters, so in a computer, we have to use a finite-parametric approximation family of sets. One of the widely used methods for selecting such a family is to pick a symmetric convex set and to use its images under all linear transformations. If we pick a unit ball, we end up with ellipsoids; if we pick a unit cube, we end up with boxes and parallelepipeds; we can also pick a polyhedron. In this paper, we show that ellipsoids lead to better approximations of actual sets than generic polyhedra; we also show that, under a reasonable conjecture, ellipsoids are better approximators than boxes.
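
The construction "images of a symmetric convex set under linear transformations" can be made concrete with a short sketch (illustrative numbers; membership tests only): an ellipsoid is the image of the unit ball under an affine map, and a parallelepiped is the image of the unit cube under the same kind of map.

```python
import numpy as np

def in_ellipsoid(x, center, A):
    """x lies in the image of the unit ball under u -> center + A u
    iff the pre-image A^{-1}(x - center) has Euclidean norm <= 1."""
    return np.linalg.norm(np.linalg.solve(A, x - center)) <= 1.0

def in_parallelepiped(x, center, A):
    """Same construction with the unit cube {u : max |u_i| <= 1}."""
    return np.max(np.abs(np.linalg.solve(A, x - center))) <= 1.0

A = np.array([[2.0, 0.5],
              [0.0, 1.0]])        # an arbitrary invertible linear map
c = np.array([1.0, -1.0])
p = np.array([3.2, -0.2])

# False True: p lies in the parallelepiped but not in the ellipsoid,
# so the two families really do approximate sets differently.
print(in_ellipsoid(p, c, A), in_parallelepiped(p, c, A))
```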

File in pdf


Technical Report UTEP-CS-13-19, March 2013
A New Analog Optical Processing Scheme for Solving NP-Hard Problems
Michael Zakharevich and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2013, Vol. 7, No. 3, pp. 238-240.

Many real-life problems are, in general, NP-hard, i.e., informally speaking, difficult to solve. To be more precise, the fact that a problem p is NP-hard means that every problem from the class NP can be reduced to this problem p. Thus, if we have an efficient algorithm for solving one NP-hard problem, we can use this reduction to get a more efficient way of solving all the problems from the class NP. To speed up computations, it is reasonable to base them on the fastest possible physical process -- i.e., on light. It is known that analog optical processing indeed speeds up the computation of several NP-hard problems. Each of the corresponding speed-up schemes has its success cases and limitations. The more schemes we know, the higher the possibility that, for a given problem, one of these schemes will prove to be effective. Motivated by this argument, we propose a new analog optical processing scheme for solving NP-hard problems.

File in pdf


Technical Report UTEP-CS-13-18, March 2013
Why l1 Is a Good Approximation to l0: A Geometric Explanation
Carlos Ramirez, Vladik Kreinovich, and Miguel Argaez

Published in Journal of Uncertain Systems, 2013, Vol. 7, No. 3, pp. 203-207.

In practice, we usually have partial information; as a result, several different possibilities are consistent with the given measurements and the given knowledge. For example, in geosciences, several possible density distributions are consistent with the measurement results. It is reasonable to select the simplest among such distributions. A general solution can be described, e.g., as a linear combination of basis functions. A natural way to define the simplest solution is to select the one for which the number of non-zero coefficients ci is the smallest. The corresponding "l0-optimization" problem is non-convex and therefore difficult to solve. As a good approximation to this problem, Candes and Tao proposed to use a solution to the convex l1 optimization problem |c1| + ... + |cn| --> min. In this paper, we provide a geometric explanation of why l1 is indeed the best convex approximation to l0.
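
For reference, the convex l1 problem can be solved by standard linear programming; the sketch below is the usual basis-pursuit reformulation (illustrative data, not the paper's setup), with the l1 norm linearized via c = u - v, u, v >= 0.

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(A, b):
    """Basis pursuit: minimize |c_1| + ... + |c_n| subject to A c = b,
    rewritten as a linear program via c = u - v with u, v >= 0."""
    m, n = A.shape
    cost = np.ones(2 * n)                      # sum of u_i + v_i = l1 norm of c
    A_eq = np.hstack([A, -A])                  # A(u - v) = b
    res = linprog(cost, A_eq=A_eq, b_eq=b)     # default bounds are x >= 0
    return res.x[:n] - res.x[n:]

A = np.array([[1.0, 0.0, 1.0,  1.0],
              [0.0, 1.0, 1.0, -1.0]])
b = A @ np.array([0.0, 0.0, 2.0, 0.0])         # data generated by a sparse c
print(np.round(l1_minimize(A, b), 3))          # ~[0, 0, 2, 0]: sparse c recovered
```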

File in pdf


Technical Report UTEP-CS-13-17, March 2013
Towards Model Fusion in Geophysics: How to Estimate Accuracy of Different Models
Omar Ochoa, Aaron Velasco, and Christian Servin

Published in Journal of Uncertain Systems, 2013, Vol. 7, No. 3, pp. 190-197.

In geophysics, we usually have several Earth models based on different types of data: seismic, gravity, etc. Each of these models captures some aspects of the Earth structure. To get a more complete description of the Earth, it is desirable to "fuse" these models into a single one. To appropriately fuse the models, we need to know the accuracy of the different models. In this paper, we show that the traditional methods cannot be directly used to estimate these accuracies, and we propose a new method for such estimation.

File in pdf


Technical Report UTEP-CS-13-16, February 2013
Comparing intervals and moments for the quantification of coarse information
Michael Beer and Vladik Kreinovich

Published in Proceedings of the Mini-Symposium "Engineering Analyses with Vague and Imprecise Information" at the 11th International Conference on Structural Safety and Reliability, New York, June 16-20, 2013.

In this paper, the problem of the most appropriate modeling of scarce information for an engineering analysis is investigated. This investigation is focused on a comparison between a rough probabilistic modeling based on the first two moments and interval modeling. In many practical cases, the available information is limited to such an extent that a more thorough modeling cannot be pursued. The engineer has to make a decision regarding the modeling of this limited and coarse information so that the results of the analysis provide the most suitable basis for conclusions. We approach this problem from the angle of information theory and propose to select the most informative model (in Shannon's sense). The investigation reveals that the answer to the question of model choice depends on the confidence that is needed in the engineering results in order to make informed decisions.
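
One simple way to make such an entropy comparison concrete (under the common assumption that the interval corresponds to a uniform distribution and the two-moment model to a normal distribution, with illustrative numbers rather than values from the paper) is to compare differential entropies:

```python
import math

def entropy_uniform(a, b):
    """Differential entropy of the uniform distribution on [a, b]."""
    return math.log(b - a)

def entropy_normal(sigma):
    """Differential entropy of a normal distribution with std sigma."""
    return 0.5 * math.log(2.0 * math.pi * math.e * sigma ** 2)

# The same coarse quantity summarized two ways: as an interval and via moments.
a, b = 9.0, 13.0                       # interval model (illustrative values)
sigma = (b - a) / (2.0 * 3 ** 0.5)     # std of the uniform on [a, b], for comparison

print(entropy_uniform(a, b))   # ~1.39 nats
print(entropy_normal(sigma))   # ~1.56 nats: higher entropy, i.e., less informative
```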

File in pdf


Technical Report UTEP-CS-13-15, February 2013
Constructing Verifiably Correct Java Programs Using OCL and CleanJava
Yoonsik Cheon and Carmen Avila

A recent trend in software development is to build a precise model that can be used as a basis for software development. Such a model may enable automatic generation of working code and, more importantly, it provides a foundation for reasoning about the correctness of the code. In this paper, we propose a practical approach for constructing a verifiably correct program from such a model. The key idea of our approach is (a) to systematically translate formally-specified design constraints, such as class invariants and operation pre- and postconditions, to code-level annotations and (b) to use the annotations for the correctness proof of the code. For this, we use both the Object Constraint Language (OCL) and CleanJava. CleanJava is a formal annotation language for Java that supports Cleanroom-style functional program verification. The combination of OCL and CleanJava makes our approach not only practical but also easy to incorporate and integrate into object-oriented software development methods. We expect our approach to provide a practical alternative or complementary technique to program testing for assuring the correctness of software.

File in pdf


Technical Report UTEP-CS-13-14, February 2013
Revised version UTEP-CS-13-14a, July 2013
Checking Monotonicity Is NP-Hard Even for Cubic Polynomials
Andrzej Pownuk, Luc Longpre, and Vladik Kreinovich

Published in Reliable Computing, 2013, Vol. 18, pp. 90-96.

One of the main problems of interval computations is to compute the range of a given function over given intervals. In general, this problem is computationally intractable (NP-hard) -- that is why we usually compute an enclosure and not the exact range. However, there are cases when it is possible to feasibly compute the exact range; one of these cases is when the function is monotonic with respect to each of its variables. The monotonicity assumption holds when the derivatives at a midpoint are different from 0 and the intervals are sufficiently narrow; because of this, monotonicity-based estimates are often used as a heuristic method. In situations when it is important to have an enclosure, it is desirable to check whether this estimate is justified, i.e., whether the function is indeed monotonic. It is known that monotonicity can be feasibly checked for quadratic functions. In this paper, we show that for cubic functions, checking monotonicity is NP-hard.
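
For contrast with the cubic case, the quadratic case is easy: every partial derivative of a quadratic function is linear, and the exact range of a linear function over a box is straightforward to compute. A minimal sketch (illustrative function and box):

```python
import numpy as np

def monotone_directions(Q, b, lower, upper):
    """For f(x) = x^T Q x + b^T x (Q symmetric) on the box [lower, upper],
    decide for each variable whether f is provably nondecreasing ('+'),
    nonincreasing ('-'), or possibly non-monotone ('?').  Each partial
    derivative 2 Q_i . x + b_i is linear, so its exact range over the box
    is easy to compute -- which is why the quadratic case is feasible."""
    signs = []
    for i in range(len(b)):
        coeffs = 2.0 * Q[i]
        lo = b[i] + np.sum(np.where(coeffs >= 0, coeffs * lower, coeffs * upper))
        hi = b[i] + np.sum(np.where(coeffs >= 0, coeffs * upper, coeffs * lower))
        signs.append('+' if lo >= 0 else '-' if hi <= 0 else '?')
    return signs

# f(x1, x2) = x1^2 + x1*x2 + x1 - 4*x2 on the box [0, 1] x [0, 1]
Q = np.array([[1.0, 0.5],
              [0.5, 0.0]])
b = np.array([1.0, -4.0])
print(monotone_directions(Q, b, np.array([0.0, 0.0]), np.array([1.0, 1.0])))
# ['+', '-']: increasing in x1, decreasing in x2, so the range is easy to compute
```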

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-13, February 2013
Updated version UTEP-CS-13-13a, March 2013
Estimating Third Central Moment C3 for Privacy Case under Interval and Fuzzy Uncertainty
Ali Jalal-Kamali and Vladik Kreinovich

Published in Proceedings of the joint World Congress of the International Fuzzy Systems Association and Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2013, Edmonton, Canada, June 24-28, 2013, pp. 454-459.

Some probability distributions (e.g., Gaussian) are symmetric, some (e.g., lognormal) are non-symmetric (skewed). How can we gauge the skewness? For symmetric distributions, the third central moment C3 = E[(x - E(x))^3] is equal to 0; thus, this moment is used to characterize skewness. This moment is usually estimated, based on the observed (sample) values x1, ..., xn, as C3 = (1/n) * ((x1 - E)^3 + ... + (xn - E)^3), where E = (1/n) * (x1 + ... + xn). In many practical situations, we do not know the exact values of xi. For example, to preserve privacy, the exact values are often replaced by intervals containing these values (so that we only know whether the age is under 10, between 10 and 20, etc.). Different values from these intervals lead, in general, to different values of C3; it is desirable to find the range of all such possible values. In this paper, we propose a feasible algorithm for computing this range.
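
The sketch below shows the sample estimate of C3 and a crude Monte Carlo inner estimate of its range under interval uncertainty; this sampling is only an illustration of what "range of C3" means and is not the feasible algorithm proposed in the paper.

```python
import random

def c3(xs):
    """Sample third central moment: (1/n) * sum (x_i - E)^3."""
    n = len(xs)
    E = sum(xs) / n
    return sum((x - E) ** 3 for x in xs) / n

def c3_range_inner(intervals, trials=100_000, seed=0):
    """Crude inner (under-)estimate of the range of C3 when each x_i is
    only known to lie in an interval: sample points from the box and keep
    the smallest and largest C3 values seen."""
    rng = random.Random(seed)
    lo = hi = None
    for _ in range(trials):
        xs = [rng.uniform(a, b) for a, b in intervals]
        v = c3(xs)
        lo = v if lo is None else min(lo, v)
        hi = v if hi is None else max(hi, v)
    return lo, hi

# Ages reported only as decades, to preserve privacy:
intervals = [(10, 20), (10, 20), (20, 30), (30, 40), (60, 70)]
print(c3_range_inner(intervals))
```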

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-12, February 2013
Updated version UTEP-CS-13-12a, March 2013
Data Anonymization that Leads to the Most Accurate Estimates of Statistical Characteristics: Fuzzy-Motivated Approach
G. Xiang, S. Ferson, L. Ginzburg, L. Longpre, E. Mayorga, and O. Kosheleva

Published in Proceedings of the joint World Congress of the International Fuzzy Systems Association and Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2013, Edmonton, Canada, June 24-28, 2013, pp. 611-616.

To preserve privacy, the original data points (with exact values) are replaced by boxes containing each (inaccessible) data point. This privacy-motivated uncertainty leads to uncertainty in the statistical characteristics computed based on this data. In a previous paper, we described how to minimize this uncertainty under the assumption that we use the same standard statistical estimates for the desired characteristics. In this paper, we show that we can further decrease the resulting uncertainty if we allow fuzzy-motivated weighted estimates, and we explain how to optimally select the corresponding weights.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-11, February 2013
Updated version UTEP-CS-13-11a, March 2013
Likert-Scale Fuzzy Uncertainty from a Traditional Decision Making Viewpoint: It Incorporates Both Subjective Probabilities and Utility Information
Joe Lorkowski and Vladik Kreinovich

Published in Proceedings of the joint World Congress of the International Fuzzy Systems Association and Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2013, Edmonton, Canada, June 24-28, 2013, pp. 525-530.

One of the main methods for eliciting the values of the membership function μ(x) is to use Likert scales, i.e., to ask the user to mark his or her degree of certainty by an appropriate mark k on a scale from 0 to n and to take μ(x)=k/n. In this paper, we show how to describe this process in terms of traditional decision making. Our conclusion is that the resulting membership degrees incorporate both probability and utility information. It is therefore not surprising that fuzzy techniques often work better than probabilistic techniques -- which only take into account the probabilities of different outcomes.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-10, February 2013
Updated version UTEP-CS-13-10a, March 2013
How to Generate Worst-Case Scenarios When Testing Already Deployed Systems Against Unexpected Situations
Francisco Zapata, Ricardo Pineda, and Martine Ceberio

Published in Proceedings of the Joint World Congress of the International Fuzzy Systems Association and Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2013, Edmonton, Canada, June 24-28, 2013, pp. 617-622.

Before a complex system is deployed, it is tested -- but it is tested against a known operational mission, under several known operational scenarios. Once the system is deployed, new, possibly unexpected and/or uncertain operational scenarios emerge. It is desirable to develop methodologies to test the system against such scenarios. A possible methodology to test the system would be to generate the worst-case scenario that we can think of -- to understand, in principle, the behavior of the system. So, we face the question of how to generate such worst-case scenarios. In this paper, we provide some guidance on how to generate such worst-case scenarios.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-09, February 2013
Updated version UTEP-CS-13-09a, April 2013
Why Complex-Valued Fuzzy? Why Complex Values in General? A Computational Explanation
Olga Kosheleva, Vladik Kreinovich, and Thavatchai Ngamsantivong

Published in Proceedings of the Joint World Congress of the International Fuzzy Systems Association and Annual Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS'2013, Edmonton, Canada, June 24-28, 2013, pp. 1233-1236.

In the traditional fuzzy logic, as truth values, we take all real numbers from the interval [0,1]. In some situations, this set is not fully adequate for describing expert uncertainty, so a more general set is needed. From the mathematical viewpoint, a natural extension of the real numbers is the set of complex numbers. Complex-valued fuzzy sets have indeed been successfully used in applications of fuzzy techniques. This practical success leaves us with a puzzling question: why have complex-valued degrees of belief, degrees which do not seem to have a direct intuitive meaning, been so successful? In this paper, we use the latest results from the theory of computation to explain this puzzle. Namely, we show that the possibility to extend to complex numbers is a necessary condition for fuzzy-related computations to be feasible. This computational result also explains why complex numbers are so efficiently used beyond fuzzy techniques: in physics, in control, etc.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-08, February 2013
Updated version UTEP-CS-13-08b, April 2013
Full Superposition Principle Is Inconsistent with Non-Deterministic Versions of Quantum Physics
Andres Ortiz and Vladik Kreinovich

Published in Cybernetics and Physics, 2013, Vol. 2, No. 1, pp. 37-40.

Many practical systems are non-deterministic, in the sense that the available information about the initial states and control values does not uniquely determine the future states. For some such systems, it is important to take quantum effects into account. For that, we need to develop non-deterministic versions of quantum physics. In this paper, we show that for non-deterministic versions of quantum physics, we cannot require the superposition principle -- one of the fundamental principles of modern quantum mechanics. Specifically, while we can consider superpositions of states corresponding to the same version of the future dynamics, it is not consistently possible to consider superpositions of states corresponding to different versions of the future.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-07, February 2013
Use of Grothendieck Inequality in Interval Computations: Quadratic Terms are Estimated Accurately Modulo a Constant Factor
Olga Kosheleva and Vladik Kreinovich

One of the main problems of interval computations is to compute the range of a given function f over given intervals. For a linear function, we can feasibly estimate its range, but for quadratic (and for more complex) functions, the problem of computing the exact range is NP-hard. So, if we limit ourselves to feasible algorithms, we have to compute enclosures instead of the actual ranges. It is known that asymptotically the smallest possible excess width of these enclosures is O(Δ^2), where Δ is the largest half-width of the input intervals. This asymptotics is attained for the Mean Value method, one of the most widely used methods for estimating the range.

The excess width is caused by quadratic (and higher order) terms in the function f. It is therefore desirable to come up with an estimation method for which the excess width decreases when the maximum of this quadratic term decreases. In the Mean Value method, while the excess width is bounded by O(Δ^2), we cannot guarantee that the excess width decreases with the size of the quadratic term. In this paper, we show that, by using Grothendieck inequality, we can create a modification of the Mean Value method in which the quadratic term is estimated accurately modulo a small multiplicative constant -- i.e., in which the excess width is guaranteed to be bounded by 3.6 times the size of the quadratic term.
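
For readers unfamiliar with the Mean Value method, here is a minimal sketch of its standard (unmodified) form on a two-variable example, using a hand-coded two-operation interval arithmetic; the Grothendieck-based modification from the paper is not shown.

```python
# Minimal interval arithmetic, just enough for the example below.
def i_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def mean_value_form(x, y):
    """Enclosure of f(x, y) = x*y + x^2 over the box (x, y) via the Mean Value
    form  f(mx, my) + df/dx(X, Y)*(x - mx) + df/dy(X, Y)*(y - my),
    where the partial derivatives 2x + y and x are evaluated in
    interval arithmetic over the whole box."""
    mx, my = (x[0] + x[1]) / 2, (y[0] + y[1]) / 2
    f_mid = (mx * my + mx * mx, mx * my + mx * mx)
    dfdx = i_add(i_mul((2.0, 2.0), x), y)      # 2x + y over the box
    dfdy = x                                   # x over the box
    dx = (x[0] - mx, x[1] - mx)
    dy = (y[0] - my, y[1] - my)
    return i_add(f_mid, i_add(i_mul(dfdx, dx), i_mul(dfdy, dy)))

print(mean_value_form((0.9, 1.1), (1.9, 2.1)))
# (2.46, 3.54) -- encloses the true range [2.52, 3.52] with small excess width
```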

File in pdf


Technical Report UTEP-CS-13-06, February 2013
Updated version UTEP-CS-13-06a, February 2014
Filtering out high frequencies in time series using F-transform
Vilem Novak, Irina Perfilieva, Michal Holcapek, and Vladik Kreinovich

Published in Information Sciences, 2014, Vol. 274, pp. 192-209.

In this paper, we will focus on the application of fuzzy transform (F-transform) in the analysis of time series. We assume that the time series can be decomposed into three constituent components: the trend-cycle, seasonal component and random noise. We will demonstrate that by using F-transform, we can approximate the trend-cycle of a given time series with high accuracy.
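
A minimal numpy sketch of the underlying computation (synthetic series and illustrative parameters, not the exact setup from the paper): the direct F-transform computes weighted local averages over a triangular fuzzy partition, and the inverse F-transform turns them back into a smooth approximation of the trend-cycle.

```python
import numpy as np

def triangular_basis(t, nodes, h):
    """Triangular membership functions A_k centered at the given nodes,
    with half-width h (a Ruspini partition when h equals the node spacing)."""
    return np.maximum(0.0, 1.0 - np.abs(t[None, :] - nodes[:, None]) / h)

def f_transform_trend(x, t, nodes, h):
    """Direct F-transform components F_k (weighted local averages),
    then the inverse F-transform, which reconstructs a smooth
    approximation of the trend-cycle of the series x(t)."""
    A = triangular_basis(t, nodes, h)      # shape (K, len(t))
    F = (A @ x) / A.sum(axis=1)            # direct F-transform
    # For this Ruspini partition, sum_k A_k(t) = 1 on [0, 10],
    # so the inverse F-transform is simply the weighted sum below.
    return F @ A

t = np.linspace(0.0, 10.0, 501)
x = 0.5 * t + np.sin(2 * np.pi * t) + 0.3 * np.random.randn(len(t))
nodes = np.linspace(0.0, 10.0, 11)                 # node spacing 1.0
trend = f_transform_trend(x, t, nodes, h=1.0)      # high frequencies filtered out
```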

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-05, February 2013
Updated version UTEP-CS-13-05a, May 2013
F-transform in View of Aggregation Functions
Irina Perfilieva and Vladik Kreinovich

Published in Proceedings of the 7th International Summer School on Aggregation Operations AGOP'2013, Pamplona, Spain, July 16-20, 2013.

A relationship between the discrete F-transform and aggregation functions is analyzed. We show that the discrete F-transform (direct or inverse) can be associated with a set of linear aggregation functions that respect a fuzzy partition of a universe. On the other hand, we discover conditions that should be added to a set of linear aggregation functions in order to obtain the discrete F-transform. Last but not least, the relationship between the two analyzed notions is based on a new (generalized) definition of a fuzzy partition without the Ruspini condition.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-04, February 2013
Brans-Dicke Scalar-Tensor Theory of Gravitation May Explain Time Asymmetry of Physical Processes
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2013, Vol. 27, pp. 28-37.

Most fundamental physical equations remain valid if we reverse the time order. Thus, if we start with a physical process (which satisfies these equations) and reverse the time order, the resulting process also satisfies all the equations and thus should also be physically reasonable. In practice, however, many physical processes are not reversible: e.g., a cup can break into pieces, but the pieces cannot magically get together and become a whole cup. In this paper, we show that the Brans-Dicke Scalar-Tensor Theory of Gravitation, one of the most widely used generalizations of Einstein's general relativity, is, in effect, time-asymmetric. This time-asymmetry may explain the observed time asymmetry of physical phenomena.

File in pdf


Technical Report UTEP-CS-13-03, January 2013
Updated version UTEP-CS-13-03a, April 2013
Why Inverse F-transform? A Compression-Based Explanation
Vladik Kreinovich, Irina Perfilieva, and Vilem Novak

Published in Proceedings of the 2013 International Conference on Fuzzy Systems FUZZ-IEEE'2013, Hyderabad, India, July 7-10, 2013, pp. 1378-1384.

In many practical situations, e.g., in signal processing, image processing, and the analysis of temporal data, it is very useful to use fuzzy (F-) transforms. In an F-transform, we first replace a function x(t) by a few local averages (this is called the forward F-transform), and then reconstruct the original function from these averages (this is called the inverse F-transform). While the formula for the forward F-transform makes perfect intuitive sense, the formula for the inverse F-transform seems, at first glance, somewhat counter-intuitive. On the other hand, its empirical success shows that this formula must have a good justification. In this paper, we provide such a justification -- a justification which is based on formulating a reasonable compression-based criterion.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-02, January 2013
Updated version UTEP-CS-13-02a, April 2013
Relation Between Polling and Likert-Scale Approaches to Eliciting Membership Degrees Clarified by Quantum Computing
Renata Hax Sander Reiser, Adriano Maron, Lidiana Visintin, Ana Maria Abeijon, and Vladik Kreinovich

Published in Proceedings of the 2013 International Conference on Fuzzy Systems FUZZ-IEEE'2013, Hyderabad, India, July 7-10, 2013, pp. 1222-1227.

In fuzzy logic, there are two main approaches to eliciting membership degrees: an approach based on polling experts, and a Likert-scale approach, in which we ask experts to indicate their degree of confidence on a scale -- e.g., on a scale from 0 to 10. Both approaches are reasonable, but they often lead to different membership degrees. In this paper, we analyze the relation between these two approaches, and we show that this relation can be made much clearer if we use models from quantum computing.

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-13-01, January 2013
Updated version UTEP-CS-13-01a, April 2013
Aggregation Operations from Quantum Computing
Lidiane Visintin, Adriano Maron, Renata Reiser, Ana Maria Abeijon, and Vladik Kreinovich

Published in Proceedings of the 2013 International Conference on Fuzzy Systems FUZZ-IEEE'2013, Hyderabad, India, July 7-10, 2013, pp. 1124-1131.

Computer systems based on fuzzy logic should be able to generate an output from inaccurate input data by applying a rule-based system. The main contribution of this paper is to show that quantum computing can be used to extend the class of fuzzy sets. The central idea associates the states of a quantum register with membership functions (mFs) of fuzzy subsets, and the fuzzification processes are performed by unitary quantum transformations (qTs). This paper introduces an interpretation of aggregations obtained from classical fuzzy states, that is, from multi-dimensional quantum registers associated with mFs on the unit interval U. In particular, t-norms and t-conorms based on quantum gates allow the modeling and interpretation of union, intersection, difference, and implication among fuzzy sets, also including an expression for the class of fuzzy S-implications. Furthermore, an interpretation of the symmetric sum was achieved by considering the sum of related classical fuzzy states. For all cases, the measurement process performed on the corresponding quantum registers yields the correct interpretation of all the logical operators.
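
One of the simplest instances of this idea -- shown here only as an illustrative sketch, with the most basic encoding rather than necessarily the authors' -- is the product t-norm obtained from the Toffoli gate: encode each membership degree as a qubit amplitude, apply CCNOT, and measure the target qubit.

```python
import numpy as np

def degree_state(mu):
    """One-qubit state encoding membership degree mu:
    sqrt(1 - mu)|0> + sqrt(mu)|1>, so that P(measure 1) = mu."""
    return np.array([np.sqrt(1.0 - mu), np.sqrt(mu)])

def quantum_and(mu_a, mu_b):
    """Intersection of fuzzy degrees via the Toffoli (CCNOT) gate:
    apply CCNOT to |psi_a>|psi_b>|0>; the probability of measuring the
    target qubit in |1> equals mu_a * mu_b, i.e., the product t-norm."""
    toffoli = np.eye(8)
    toffoli[[6, 7], [6, 7]] = 0.0
    toffoli[6, 7] = toffoli[7, 6] = 1.0          # swap |110> and |111>
    state = np.kron(np.kron(degree_state(mu_a), degree_state(mu_b)),
                    np.array([1.0, 0.0]))        # ancilla target in |0>
    out = toffoli @ state
    return float(np.sum(out[1::2] ** 2))         # P(target qubit is 1)

print(quantum_and(0.8, 0.5))   # 0.4 = product t-norm of 0.8 and 0.5
```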

Original file in pdf
Updated version in pdf