University of Texas at El Paso
Computer Science Department
Abstracts of 2010 Reports


Technical Report UTEP-CS-10-60, December 2010
Revised version UTEP-CS-10-60a, January 2011
Revised version UTEP-CS-10-60b, September 2011
Extreme Distributions on Intervals
Monchaya Chiangpradit, Wararit Panichkitkosolkul, Hung T. Nguyen, and Vladik Kreinovich

Published in Computational Technologies, 2012, Vol. 17, No. 1, pp. 17-25.

One of the main tasks of interval computation is to analyze situations in which we only know the lower and upper bounds on the desired quantity, i.e., we only know an interval that contains this quantity. One of the objectives of such analysis is to make decisions. According to decision theory, a consistent decision making procedure is equivalent to assigning probabilities to different values within each interval. Thus, we arrive at the problem of describing a natural probability distribution on an interval. In this paper, we describe such a distribution for the practically important case of the "weakest link" arrangement, when the collapse of each link is catastrophic for a system. This situation occurs in fracture mechanics, when a fracture in one of the areas makes the whole plane inoperable, in economics, when the collapse of one large bank or one country can have catastrophic consequences, etc.

Original file in pdf
first updated version in pdf
second updated version in pdf


Technical Report UTEP-CS-10-59, December 2010
Universal Approximation with Uninorm-Based Fuzzy Neural Networks
Andre Lemos, Vladik Kreinovich, Walmir Caminhas, and Fernando Gomide

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

Fuzzy neural networks are hybrid models capable of approximating functions with high precision and of generating transparent models, enabling the extraction of valuable information from the resulting topology. In this paper, we show that the recently proposed fuzzy neural network based on weighted uninorm aggregations uniformly approximates any real function on any compact set. We describe the network topology and inference mechanism and show that the universal approximation property of this network is valid for a given choice of operators.

File in pdf


Technical Report UTEP-CS-10-58, December 2010
Why Curvature in L-Curve: Combining Soft Constraints
Uram Anibal Sosa Aguirre, Martine Ceberio, and Vladik Kreinovich

Preliminary version published in Proceedings of the Fourth International Workshop on Constraint Programming and Decision Making CoProD'11, El Paso, Texas, March 17, 2011; final version published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making, Springer Verlag, Berlin, Heidelberg, 2014, pp. 175-180.

In solving inverse problems, one of the successful methods of determining the appropriate value of the regularization parameter is the L-curve method of combining the corresponding soft constraints, when we plot the curve describing the dependence of the logarithm x of the mean square difference on the logarithm y of the mean square non-smoothness, and select a point on this curve at which the curvature is the largest. This method is empirically successful, but from the theoretical viewpoint, it is not clear why we should use curvature and not some other criterion. In this paper, we show that reasonable scale-invariance requirements lead to curvature and its generalizations.
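
A minimal numeric sketch of the maximum-curvature selection described above (the finite-difference curvature formula is the standard one for a parametric curve; the function name and the use of numpy are illustrative assumptions, not the paper's implementation):

    import numpy as np

    def l_curve_corner(x, y):
        # x, y: the logarithms described above, sampled along increasing
        # values of the regularization parameter; returns the index of the
        # point of maximum curvature of the parametric curve (x(t), y(t)).
        dx, dy = np.gradient(x), np.gradient(y)
        ddx, ddy = np.gradient(dx), np.gradient(dy)
        kappa = np.abs(dx * ddy - dy * ddx) / (dx**2 + dy**2) ** 1.5
        return int(np.argmax(kappa))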

File in pdf


Technical Report UTEP-CS-10-57, December 2010
Adding Constraints -- A (Seemingly Counterintuitive but) Useful Heuristic in Solving Difficult Problems
Olga Kosheleva, Martine Ceberio, and Vladik Kreinovich

Preliminary version published in Proceedings of the Fourth International Workshop on Constraint Programming and Decision Making CoProD'11, El Paso, Texas, March 17, 2011; final version published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making, Springer Verlag, Berlin, Heidelberg, 2014, pp. 79-84.

Intuitively, the more constraints we impose on a problem, the more difficult it is to solve it. However, in practice, difficult-to-solve problems sometimes get solved when we impose additional constraints and thus make the problems seemingly more complex. In this methodological paper, we explain this seemingly counter-intuitive phenomenon, and we show that, due to this explanation, additional constraints can serve as a useful heuristic in solving difficult problems.

File in pdf


Technical Report UTEP-CS-10-56, December 2010
How to Bargain: An Interval Approach
Vladik Kreinovich, Hung T. Nguyen, and Songsak Sriboonchitta

Published in International Journal of Intelligent Technologies and Applied Statistics (IJITAS), 2011, Vol. 4, No. 2, pp. 147-164.

In many real-life situations, we need to bargain. What is the best bargaining strategy? If you are already in a negotiating process, your previous offer was a, and the seller's last offer was A > a, what next offer a' should you make? A usual commonsense recommendation is to "split the difference", i.e., to offer a' = (a + A) / 2, or, more generally, to offer a linear combination a' = k * A + (1 - k) * a (for some parameter k from the interval (0,1)).

The bargaining problem falls under the scope of the theory of cooperative games. In cooperative games, there are many reasonable solution concepts. Some of these solution concepts -- like Nash's bargaining solution that recommends maximizing the product of utility gains -- lead to offers that linearly depend on a and A; other concepts lead to non-linear dependence. From the practical viewpoint, it is desirable to come up with a recommendation that would not depend on a specific selection of the solution concept -- and on specific difficult-to-verify assumptions about the utility function etc.

In this paper, we deliver such a recommendation: specifically, we show that under reasonable assumption, we should always select an offer that linearly depends on a and A.
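
A minimal illustration of the linear-offer recommendation discussed above (the value of k and the sample numbers below are arbitrary choices for the example, not values from the paper):

    def next_offer(a, A, k=0.5):
        # linear counteroffer a' = k * A + (1 - k) * a;
        # k = 0.5 is the usual "split the difference" rule
        assert 0.0 < k < 1.0 and a < A
        return k * A + (1.0 - k) * a

    # example: buyer's last offer 80, seller's last offer 100
    print(next_offer(80.0, 100.0))         # 90.0 (split the difference)
    print(next_offer(80.0, 100.0, k=0.3))  # 86.0 (a more cautious buyer)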

File in pdf


Technical Report UTEP-CS-10-55, November 2010
Visualization Queries
Nicholas Del Rio and Paulo Pinheiro da Silva

This paper introduces the notion of a query that describes a visualization request in terms of a universe of concepts, which includes but is not limited to Views, Operators, and Parameters. In the presence of specifications based on these concepts, users may be able to request the properties they want to see in their visualizations (i.e., the graphical analogues of some dataset) without having to specify how to generate them, and, more importantly, without being fully aware of a wide range of toolkit-specific implementation details. The result of a query is a visualization that satisfies declarative user-defined criteria for creating visualizations. This paper explores the requirements for visualization queries and exemplifies how such queries could be used to drive the generation of gravity contour maps using two popular visualization toolkits. Additionally, the paper highlights the infrastructure requirements that could support visualization queries.

File in pdf


Technical Report UTEP-CS-10-54, November 2010
A New Answer to Pauli's Question: Almost All Quantum States Can Be Uniquely Determined by Measuring Location and Momentum
Don Jackson and Olga Kosheleva

Published in Journal of Uncertain Systems, 2012, Vol. 6, No. 2, pp. 100-103.

In classical mechanics, we can uniquely reconstruct the state of each particle by measuring its spatial location and momentum. In his 1958 paper, W. Pauli, one of the founders of quantum mechanics, conjectured that the same should be true in the quantum case as well: that every quantum state can be uniquely determined by measuring spatial location and momentum. This conjecture was disproven: there are pairs of physically different states that cannot be distinguished if we only measure spatial location and momentum. A natural question is: how frequent are such pairs? In this paper, we show that almost all quantum states can be uniquely reconstructed by measuring spatial location and momentum. Thus, in practice, Pauli's conjecture is true.

File in pdf


Technical Report UTEP-CS-10-53, November 2010
Why L2 topology in quantum physics
Chris Culellar, Evan Longpre, and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2012, Vol. 6, No. 2, pp. 92-99.

It is known that in quantum mechanics, the set S of all possible states coincides with the set of all the complex-valued functions f(x) for which the integral of |f(x)|^2 is 1. From the mathematical viewpoint, this set is a unit sphere in the space L2 of all the functions for which this integral is finite. Because of this mathematical fact, the set S is usually considered with the topology induced by L2, i.e., the topology in which the basis of open neighborhoods of a state f is formed by the open balls. This topology seems to work fine, but since this is a purely mathematical definition, a natural question appears: does this topology have a physical meaning? In this paper, we show that a natural physical definition of closeness indeed leads to the usual L2-topology.
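
For reference, in the standard notation (not specific to this paper): the L2 distance between two states is d(f,g) = (integral of |f(x) - g(x)|^2 dx)^(1/2), and the open ball of radius eps centered at f is the set of all states g with d(f,g) < eps; these balls form the basis of the L2 topology discussed here.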

File in pdf


Technical Report UTEP-CS-10-52, November 2010
Updated version UTEP-CS-10-52a, December 2010
From Program Synthesis to Optimal Program Synthesis
Joaquin Reyna

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In many practical situations, we know the values of some quantities x1, ..., xn, we know the relations between these quantities, the desired quantity y, and maybe some auxiliary quantities, and we want to estimate y. There exist automatic tools for such estimations -- called program synthesis tools.

A program synthesis tool usually generates a program for computing y. In many cases, however, several such programs are possible, and it is desirable to generate the optimal (e.g., the fastest) program. In this paper, we describe algorithms aimed at such optimal program synthesis.

The problem can be interpreted in logical terms, as assigning fuzzy-style degrees to rules describing relations between variables.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-51, November 2010
Updated version UTEP-CS-10-51a, December 2010
Fusing Continuous and Discrete Data, on the Example of Merging Seismic and Gravity Models in Geophysics
Omar Ochoa, Aaron Velasco, and Vladik Kreinovich

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In many application areas, we need to fuse continuous and discrete models of the same phenomena. For example, in geophysics, we have two main models for describing how the sound velocity changes with location and depth: a discrete gravity-based model, in which we have several layers with abrupt transitions between layers, and a seismic model, in which the velocity continuously changes with the change in location and depth -- and a transition is represented by a steeper change. Due to inevitable uncertainty, in the two models to be fused, the same actual transition is placed at slightly different depths.

If we simply fuse these models, the fused model will inherit both nearby transitions and will therefore, misleadingly, show two nearby transitions instead of one. It is therefore necessary, before fusing, to first get a fused (more accurate) location of the transition surface.

In this paper, we show how to find such a location.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-50, November 2010
Updated version UTEP-CS-10-50a, December 2010
Testing Shock Absorbers: Towards a Faster Parallelizable Algorithm
Christian Servin

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

Cars are equipped with shock absorbers, which are designed to smooth out the shocks on the road. In practice, there is a need to test them. To test the shock absorbers, we need to estimate the values of the shock absorber's parameters. If we did not have any measurement errors, then two measurements would be sufficient to determine the parameters. However, in reality, there are measurement errors. Usually, in engineering practice, it is assumed that the errors are normally distributed with 0 mean, so we can use the least squares method to test it. In practice, we often only know the upper bound on the measurement errors, so we have interval uncertainty. In principle, the problem of determining the parameters of the shock absorber under interval uncertainty can be solved by reducing it to several linear programming problems. However, linear programming problems take a reasonably long time O(n^3.5). A natural way to speed up computations is to parallelize the algorithm. However, it is known that linear programming is provably the most difficult problem to parallelize. So instead, we propose a new algorithm for finding ranges for the shock absorber's parameters, an algorithm which is not only faster but also easy to parallelize.
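
As a rough illustration of the reduction mentioned above (not the paper's faster algorithm), here is a generic sketch, under the assumption that the model is linear in its parameters, of how the range of one parameter can be bounded by two linear programs; the function name and the use of scipy are illustrative assumptions:

    import numpy as np
    from scipy.optimize import linprog

    def parameter_range(F, X, D, j):
        # Bound the j-th parameter of a model y = F @ p when each measurement
        # X[i] is only known to lie within [X[i] - D[i], X[i] + D[i]].
        # Two linear programs: minimize and maximize p[j] subject to
        # X - D <= F @ p <= X + D.  (Assumes both LPs are feasible.)
        n_par = F.shape[1]
        A_ub = np.vstack([F, -F])
        b_ub = np.concatenate([X + D, -(X - D)])
        free = [(None, None)] * n_par
        lo = linprog(c=np.eye(n_par)[j], A_ub=A_ub, b_ub=b_ub, bounds=free)
        hi = linprog(c=-np.eye(n_par)[j], A_ub=A_ub, b_ub=b_ub, bounds=free)
        return lo.fun, -hi.fun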

Original file in pdf
Updated version in pdf


Technical Report UTEP-CS-10-49, November 2010
CleanJava: A Formal Notation for Functional Program Verification
Yoonsik Cheon, Cesar Yeep and Melisa Vela

Unlike Hoare-style program verification, functional program verification supports forward reasoning by viewing a program as a mathematical function from one program state to another and proving its correctness by essentially comparing two mathematical functions, the function computed by the program and its specification. Since it requires a minimal mathematical background and reflects the way programmers reason about the correctness of a program informally, it can be taught and practiced effectively. However, there is no formal notation supporting functional program verification. In this paper, we propose a formal notation for writing functional program specifications for Java programs. The notation, called CleanJava, is based on the Java expression syntax and is extended with a mathematical toolkit consisting of sets and sequences. The vocabulary of CleanJava can also be enriched by introducing user-specified definitions such as user-defined mathematical functions and specification-only methods. We believe that CleanJava is a good notation for writing functional specifications and expect it to promote the use of functional program verification by being able to specify a wide range of Java programs.

File in PDF and in Compressed Postscript


Technical Report UTEP-CS-10-48, November 2010
Power vs. Performance Evaluation of Synthetic Aperture Radar Image-Formation Algorithms and Implementations for Embedded HEC Environments (Ongoing Study)
Ricardo Portillo, Sarala Arunagiri, and Patricia J. Teller

The continuing miniaturization and parallelization of processing hardware has facilitated the development of mobile and field-deployable systems that can accommodate terascale processing within once prohibitively small size and weight constraints. Unfortunately, the added computational capability of these small systems often comes at the cost of larger power demands, an already strained resource in these embedded systems. This study explores the power issues in a specific type of field-deployable system, Mobile Radar. Specifically, we focus on a computationally intensive phase of Synthetic Aperture Radar, Image Formation (IF), and evaluate performance tradeoffs in terms of time-to-solution, output image quality, and power consumption for two different implementations, single- and double-precision, of two different IF algorithms, one frequency-domain based and the other time-domain based. Preliminary results show that in some CPU-based instances single-precision IF leads to significant reductions in time-to-solution and, thus, total energy consumption (over 50%) with negligible but possibly acceptable SAR image output degradation. In the near future, this ongoing study will reevaluate these results, i.e., SAR IF power consumption vs. performance tradeoffs with more sophisticated IF workloads and output quality metrics, finer-grain performance and power measurement methodologies, and more computationally powerful embedded HEC devices, i.e., GPGPUs.

File in pdf


Technical Report UTEP-CS-10-47, October 2010
Updated version UTEP-CS-10-47a, December 2010
Towards Optimal Placement of Bio-Weapon Detectors
Chris Kiekintveld and Octavio Lerma

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

Biological weapons are difficult and expensive to detect. Within a limited budget, we can afford a limited number of bio-weapon detector stations. It is therefore important to find the optimal locations for such stations. A natural idea is to place more detectors in the areas with more population -- and fewer in desert areas, with fewer people. However, such a commonsense analysis does not tell us how many detectors to place where. To decide on the exact placement of bio-weapon detectors, we formulate the placement problem in precise terms, and come up with an (almost) explicit solution to the resulting optimization problem.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-46, October 2010
Short version UTEP-CS-10-46a, December 2010
Under Physics-Motivated Constraints, Generally-Non-Algorithmic Computational Problems Become Algorithmically Solvable
Extended version UTEP-CS-10-46b, May 2011
Updated version UTEP-CS-10-46c, March 2012
Negative Results of Computable Analysis Disappear If We Restrict Ourselves to Random (Or, More Generally, Typical) Inputs
Vladik Kreinovich

Short version published in Proceedings of the Fourth International Workshop on Constraint Programming and Decision Making CoProD'11, El Paso, Texas, March 17, 2011; more detailed versions published in Mathematical Structures and Modeling, 2012, Vol. 25, pp. 100-113 and in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making, Springer Verlag, Berlin, Heidelberg, 2014, pp. 85-90.

It is well known that many computational problems are, in general, not algorithmically solvable: e.g., it is not possible to algorithmically decide whether two computable real numbers are equal, and it is not possible to compute the roots of a computable function. We propose to restrict such operations to certain "sets of typical elements" or "sets of random elements".

In our previous papers, we proposed (and analyzed) physics-motivated definitions for these notions. In short, a set T is a set of typical elements if for every definable sequence of sets An for which each An+1 is a subset of An and the intersection of all An is empty, there exists an N for which T has no common elements with AN; the definition of a set of random elements with respect to a probability measure P is similar, with the condition that the intersection of all An is empty replaced by a more general condition that this intersection has probability 0.
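
For example (an immediate consequence of this definition, stated here only as an illustration): for the definable decreasing sequence An = {n, n+1, n+2, ...} of sets of natural numbers, the intersection is empty, so every set T of typical natural numbers must be disjoint from some AN -- i.e., all typical natural numbers are bounded by some constant N.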

In this paper, we show that if we restrict computations to such typical or random elements, then problems which are non-computable in the general case -- like comparing real numbers or finding the roots of a computable function -- become computable.

Original file in pdf
Short version in pdf
Extended version in pdf
Updated version in pdf
Final version in pdf


Technical Report UTEP-CS-10-45, October 2010
Towards Simpler Description of Properties like Commutativity and Associativity: Using Expression Fragments
Shubhra Datta, Valeria Fierro, Krasen Petrov, Jessica Romo, Gesuri Ramirez, and Cesar Valenzuela

Published in Journal of Uncertain Systems, 2011, Vol. 5, No. 2, pp. 84-89.

Properties like commutativity and associativity are very important in many applications. To make detecting these properties easier, it is desirable to reformulate them in the simplest possible way. A known way to simplify these descriptions is to use postfix notations, in which there is no need to use parentheses. We show that some of these properties can be further simplified if we represent them as equalities not between well-defined expressions, but as equalities between expression fragments.

File in pdf


Technical Report UTEP-CS-10-44, October 2010
Uncertainty in Partially Ordered Sets as a Natural Generalization of Intervals: Negative Information Is Sufficient, Positive Is Not
David Mireles and Olga Kosheleva

Published in Journal of Uncertain Systems, 2011, Vol. 5, No. 2, pp. 96-101.

In many real-life applications, we have an ordered set: a set of all space-time events, a set of all alternatives, a set of all degrees of confidence. In practice, we usually have only partial information about an element x of this set. This partial information includes positive knowledge: that a <= x or x <= a for some known a, and negative knowledge: that it is not true that a <= x or x <= a for some known a. In the case of a total order, the set of all elements satisfying this partial information is an interval. We show that in the general case of a partial order, the corresponding analogue of an interval is a convex set. We also show that in general, to describe partial knowledge, it is sufficient to have only negative information about x but it is not sufficient to have only positive information.

File in pdf


Technical Report UTEP-CS-10-43, October 2010
Why Feynman Path Integration?
Jaime Nava, Juan Ferret, Vladik Kreinovich, Gloria Berumen, Sandra Griffin, and Edgar Padilla

Published in Journal of Uncertain Systems, 2011, Vol. 5, No. 2, pp. 102-110.

To describe physics properly, we need to take into account quantum effects. Thus, for every non-quantum physical theory, we must come up with an appropriate quantum theory. A traditional approach is to replace all the scalars in the classical description of this theory by the corresponding operators. The problem with the above approach is that, due to the non-commutativity of the quantum operators, two mathematically equivalent formulations of the classical theory can lead to different (non-equivalent) quantum theories. An alternative quantization approach, which directly transforms the non-quantum action functional into the appropriate quantum theory, was indeed proposed by the Nobelist Richard Feynman, under the name of path integration. Feynman path integration is not just a foundational idea; it is actually an efficient computing tool (Feynman diagrams).

From the pragmatic viewpoint, the Feynman path integral is a great success. However, from the foundational viewpoint, we still face an important question: why Feynman's path integration formula? In this paper, we provide a natural explanation for Feynman's path integration formula.

File in pdf


Technical Report UTEP-CS-10-42, October 2010
Expanding Algorithmic Randomness to the Algebraic Approach to Quantum Physics: Kolmogorov Complexity and Quantum Logics
Vladik Kreinovich

Published in Journal of Uncertain Systems, 2011, Vol. 5, No. 1, pp. 90-95.

Physicists usually assume that events with a very small probability cannot occur. Kolmogorov complexity formalizes this idea for non-quantum events. We show how this formalization can be extended to quantum events as well.

File in pdf


Technical Report UTEP-CS-10-41, October 2010
Equivalence of Gian-Carlo Rota Poset Approach and Taylor Series Approach Extended to Variant Ligands
Jaime Nava and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2011, Vol. 5, No. 2, pp. 111-118.

In many practical situations, molecules can be obtained from a "template" molecule like benzene by replacing some of its hydrogen atoms with ligands (other atoms or atom groups). There can be many possible replacements of this type. To avoid time-consuming testing of all possible replacements, it is desirable to test some of the replacements and then extrapolate to others -- so that only the promising molecules, for which the extrapolated values are desirable, will have to be synthesized and tested.

For this extrapolation, D. J. Klein and co-authors proposed to use a poset extrapolation technique developed by G.-C. Rota from MIT. One of the limitations of this approach is that this technique has been originally proposed on a heuristic basis, with no convincing justification of its applicability to chemical (or other) problems. In our previous paper, we showed that for the case when all the ligands are of the same type, the poset technique is actually equivalent to a more familiar (and much more justified) Taylor series extrapolation. In this paper, we show that this equivalence can be extended to the case when we have variant ligands.

File in pdf


Technical Report UTEP-CS-10-40, October 2010
Towards a Fast, Practical Alternative to Joint Inversion of Multiple Datasets: Model Fusion
Omar Ochoa, Aaron A. Velasco, and Christian Servin

Published in Journal of Uncertain Systems, 2011, Vol. 5, No. 2, pp. 119-136.

In many areas of science and engineering, we have different sources of data. For example, in geophysics, there are many sources of data for Earth models: first-arrival passive seismic data (from the actual earthquakes), first-arrival controlled-source seismic data (from the seismic experiments), gravity data, etc.

Datasets coming from different sources can provide complementary information. In general, some of the datasets provide better accuracy and/or spatial resolution in some spatial areas and at some depths, while other datasets provide better accuracy and/or spatial resolution in other areas or at other depths. For example: each gravity data point describes the result of measuring the gravity field at some spatial location; this field is generated by the joint effects of many locations; as a result, gravity generally measures the average density over a reasonably large spatial region. Thus, estimates based on gravity measurements have (relatively) low spatial resolution. In contrast, seismic waves generally travel along a narrow trajectory from a seismic source (earthquake or explosion) to a recording sensor. Thus, the spatial resolution corresponding to seismic data is much higher than that of gravity data.

At present, each of these datasets is often processed separately, resulting in several different models reflecting different aspects of the studied phenomena. It is therefore desirable to combine data from different datasets.

An ideal approach would be to use all the datasets to produce a single model. However, in many research areas -- including geophysics -- there are no efficient algorithms for simultaneously processing all the different datasets. While such joint inversion methods are being developed, as a first step, we propose a practical solution: to fuse the models coming from different datasets.

File in pdf


Technical Report UTEP-CS-10-39, October 2010
Strings Lead to Lattice-Type Causality
Francisco Zapata, Essau Ramirez, Joel A. Lopez, and Olga Kosheleva

Published in Journal of Uncertain Systems, 2011, Vol. 5, No. 2, pp. 154-160.

Traditional causality relation of special relativity is a lattice in the simplest case of 1-D space (2-D space-time), but it is no longer a lattice in the actual 3-D space (and 4-D space-time). We show that if we take into account effects of string theory, then we get a lattice-type causality relation even for the 4-D space-time.

File in pdf


Technical Report UTEP-CS-10-38, October 2010
A Use Case-Guided Comparison of OPM and PML
Paulo Pinheiro da Silva and Steve Roach

The World Wide Web Consortium (W3C) Provenance Incubator Group has the goal of providing "a state-of-the art understanding ... in the area of provenance for Semantic Web technologies." To enhance the mutual understanding of language capabilities, the group is attempting to map several of the existing provenance languages such as Provenir, PREMIS, and the Proof Markup Language (PML) into the provenance language the Open Provenance Model (OPM). OPM is intended to be a precise inter-lingua for exchanging provenance information. This article contributes to the understanding of the capabilities of OPM and PML by establishing a set of six common and relatively simple provenance use cases and comparing the OPM and PML models and implications of those models. A provenance use case consists of a scenario and a provenance question associated with the scenario. Some of the use cases are taken from the OPM specification document. The use cases in this article may be considered essential for any provenance encoding intended to be used for provenance interoperability. The modeling of the use cases exposes a number of substantial difficulties in creating and interpreting OPM specifications for use by machine reasoning systems.

File in pdf


Technical Report UTEP-CS-10-37, October 2010
Revised version UTEP-CS-10-37a, December 2010
Fundamental Physical Equations Can Be Derived By Applying Fuzzy Methodology to Informal Physical Ideas
Eric Gutierrez and Vladik Kreinovich

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

Fuzzy methodology transforms expert ideas -- formulated in terms of words from natural language -- into precise rules and formulas. In this paper, we show that by applying this methodology to intuitive physical ideas, we can get fundamental physical equations. This fact provides an additional justification for the fuzzy methodology.

Original file in pdf
Revised version in pdf


Technical Report UTEP-CS-10-36, October 2010
Revised version UTEP-CS-10-36a, December 2010
How to Tell When a Product of Two Partially Ordered Spaces Has a Certain Property: General Results with Application to Fuzzy Logic
Francisco Zapata, Olga Kosheleva, and Karen Villaverde

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In this paper, we describe how checking whether a given property F is true for a product A1 X A2 of partially ordered spaces can be reduced to checking several related properties of the original spaces Ai.

This result is useful in fuzzy logic, where, to compare our degree of confidence in several statements, we often need to combine relative confidence comparison results provided by different experts. For example, Cartesian product corresponds to the cautious approach, when our confidence in S' is higher than confidence in S if and only if all the experts are more confident in S' than in S. Alternatively, if we have an absolute confidence in the first expert and we use the opinion of the second expert only if the first expert cannot decide, we get a lexicographic product.
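
In the standard order-theoretic notation (general definitions, not specific to this paper): in the Cartesian (component-wise) product, (a1, a2) <= (b1, b2) if and only if a1 <= b1 and a2 <= b2, while in the lexicographic product, (a1, a2) <= (b1, b2) if and only if either a1 < b1, or a1 = b1 and a2 <= b2.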

Original file in pdf
Updated file in pdf


Technical Report UTEP-CS-10-35, October 2010
Updated version UTEP-CS-10-35a, December 2010
Detailed version UTEP-CS-10-35b, May 2011
Computations under Time Constraints: Algorithms Developed for Fuzzy Computations Can Help
Karen Villaverde, Olga Kosheleva, and Martine Ceberio

Short version published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011; detailed version published in Journal of Uncertain Systems, 2012, Vol. 6, No. 2, pp. 138-145.

In usual computers -- that use binary representation of real numbers -- an irrational real number (and even a rational number like 1.3 or 1.2) can only be computed with a finite accuracy. The more accuracy we need, the larger the computation time. It is therefore reasonable to characterize the complexity of computing a real number a by the accuracy D(t) that we can achieve in time t. Once we know this characteristic for two numbers a and b, how can we compute a similar characteristic for, e.g., c = a + b? In this paper, we show that the problem of computing this characteristic can be reduced to the problem of computing the membership function for the sum -- when we use Zadeh's extension principle with algebraic product as the "and"-operation. Thus, known algorithms for computing this membership function can be used to describe computations under time constraints.
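
For reference, the construction referred to here is the following: for c = a + b, Zadeh's extension principle with the algebraic product as the "and"-operation gives mu_c(z) = sup {mu_a(x) * mu_b(y) : x + y = z}, where mu_a, mu_b, and mu_c are the membership functions of a, b, and c.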

Original file in pdf
Revised version in pdf
Detailed version in pdf


Technical Report UTEP-CS-10-34, October 2010
Revised version UTEP-CS-10-34a, December 2010
From Single to Double Use Expressions, with Applications to Parametric Interval Linear Systems: On Computational Complexity of Fuzzy and Interval Computations
Joe Lorkowski

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In many practical problems, we need to estimate the range of a given expression f(x1, ..., xn) when each input xi belongs to a known interval [xi] -- or when each input xi is described by a known fuzzy set. It is known that this problem is easy to solve when we have a Single Use Expression, i.e., an expression in which each variable xi occurs only once. In this paper, we show that for similarly defined Double Use Expressions, the corresponding range estimation problem is NP-hard. Similar problems are analyzed for the problem of solving linear systems under interval (and fuzzy) uncertainty.
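
As a toy illustration of the Single Use vs. Double Use distinction (naive interval arithmetic only; the paper's NP-hardness result concerns general Double Use Expressions, not this example):

    # naive interval arithmetic on pairs (lower, upper)
    def i_add(a, b):   # [a1, a2] + [b1, b2]
        return (a[0] + b[0], a[1] + b[1])

    def i_sub(a, b):   # [a1, a2] - [b1, b2]
        return (a[0] - b[1], a[1] - b[0])

    x, y = (0.0, 1.0), (2.0, 3.0)
    print(i_add(x, y))   # single use x + y: naive evaluation is exact, (2.0, 4.0)
    print(i_sub(x, x))   # double use x - x: naive result (-1.0, 1.0), true range (0.0, 0.0)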

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-33, October 2010
Revised version UTEP-CS-10-33a, December 2010
Reducing Over-Conservative Expert Failure Rate Estimates in the Presence of Limited Data: A New Probabilistic/Fuzzy Approach
Carlos Ferregut, F. Joshua Campos, and Vladik Kreinovich

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

Unique highly reliable components are typical for aerospace industry. For such components, due to their high reliability and uniqueness, we do not have enough empirical data to make statistically reliable estimates about their failure rate. To overcome this limitation, the empirical data is usually supplemented with expert estimates for the failure rate. The problem is that experts tend to be -- especially in aerospace industry -- over-cautious, over-conservative; their estimates for the failure rate are usually much higher than the actual observed failure rate. In this paper, we provide a new fuzzy-related statistically justified approach for reducing this over-estimation.

Original file in pdf
revised version in pdf


Technical Report UTEP-CS-10-32, October 2010
Revised version UTEP-CS-10-32a, December 2010
Towards Optimal Sensor Placement in Multi-Zone Measurements
Octavio Lerma, Craig Tweedie, and Vladik Kreinovich

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In multi-zone areas, where the boundaries change with time, it is desirable to place sensors in such a way that the boundary is covered at all times. In this paper, we describe the optimal sensor placement with this property. In this optimal placement, sensors are placed along a see-saw trajectory going between the current location of the boundary and its farthest future location.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-31, October 2010
Revised version UTEP-CS-10-31a, December 2010
Least Sensitive (Most Robust) Fuzzy "Exclusive Or" Operations
Jesus E. Hernandez and Jaime Nava

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In natural language, "or" sometimes means "inclusive or" and sometimes means "exclusive or". To adequately describe commonsense and expert knowledge, it is therefore important to have not only t-conorms describing fuzzy "inclusive or" operations, but also fuzzy "exclusive or" operations f(a,b). Since the degrees of certainty are only approximately defined, it is reasonable to require that the corresponding operation be the least sensitive to small changes in the inputs. In this paper, we show that the least sensitive fuzzy "exclusive or" operation has the form f(a,b)=min(max(a,b),max(1-a,1-b)).
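
A minimal sketch of the resulting operation (the formula is the one stated above; the sample degrees are arbitrary):

    def fuzzy_xor(a, b):
        # least sensitive fuzzy "exclusive or": min(max(a,b), max(1-a,1-b))
        return min(max(a, b), max(1.0 - a, 1.0 - b))

    print(fuzzy_xor(0.0, 1.0), fuzzy_xor(1.0, 1.0))  # 1.0 0.0 (crisp case)
    print(fuzzy_xor(0.7, 0.4))                       # min(0.7, 0.6) = 0.6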

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-30, October 2010
Updated version UTEP-CS-10-30a, December 2010
Mamdani Approach to Fuzzy Control, Logical Approach, What Else?
Samuel Bravo and Jaime Nava

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In fuzzy control, two approaches are mainly used: Mamdani's approach, in which we represent the knowledge base as a disjunction of statements Ai(x) & Bi(u) corresponding to individual rules, and the logical approach, in which the knowledge base is represented as a conjunction of the rules themselves Ai(x) --> Bi(u). Both approaches are known not to be perfect, so a natural question arises: what other approaches are possible? In this paper, we describe all possible approaches; alternative approaches use an "exclusive or" operation and correspond, e.g., to the fuzzy transform idea.
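
For reference, with the simplest choices of min for "and", max for "or", and max(1-a, b) for implication (these specific operations are an illustrative assumption, not necessarily the ones used in the paper), the two knowledge-base representations become R(x,u) = max over i of min(Ai(x), Bi(u)) for Mamdani's approach, and R(x,u) = min over i of max(1 - Ai(x), Bi(u)) for the logical approach.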

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-29, October 2010
Revised version UTEP-CS-10-29a, December 2010
Estimating Mean under Interval Uncertainty and Variance Constraint
Ali Jalal Kamali, Luc Longpre, and Misha Koshelev

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In many practical situations, we have a sample of objects of a given type. When we measure the values of a certain quantity for these objects, we get a sequence of values x1, ..., xn. When the sample is large enough, then the arithmetic mean E of the values xi is a good approximation for the average value of this quantity for all the objects from this class.

The values xi come from measurements, and measurements are never absolutely accurate. Often, the only information that we have about the measurement error is the upper bound Di on this error. In this case, once we have the measurement result Xi, the condition that |Xi-xi| <= Di implies that the actual (unknown) value xi belongs to the interval [Xi - Di, Xi + Di].

In addition, we often know the upper bound V0 on the variance V of the actual values -- e.g., we know that the objects belong to the same species, and we know that within-species differences cannot be too high. In such cases, to estimate the average over the class, we need to find the range of possible values of the mean under the constraints that each xi belongs to the given interval [xi] and that the variance V(x1, ..., xn) is bounded by a given value V0. In this paper, we provide efficient algorithms for computing this range.
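
A naive sketch of the problem statement (a generic numerical formulation for illustration only, not the paper's efficient algorithms; the use of scipy and the sample numbers are assumptions):

    import numpy as np
    from scipy.optimize import minimize

    def mean_range(X, D, V0):
        # Bound the mean E(x) = (x1 + ... + xn)/n over all tuples x with
        # xi in [Xi - Di, Xi + Di] and sample variance V(x) <= V0.
        X, D = np.asarray(X, float), np.asarray(D, float)
        bounds = list(zip(X - D, X + D))
        cons = [{"type": "ineq", "fun": lambda x: V0 - np.var(x)}]
        lo = minimize(lambda x: np.mean(x), X, bounds=bounds, constraints=cons)
        hi = minimize(lambda x: -np.mean(x), X, bounds=bounds, constraints=cons)
        return lo.fun, -hi.fun

    # hypothetical example: returns approximately (10.0, 12.0)
    print(mean_range(X=[10.0, 11.0, 12.0], D=[1.0, 1.0, 1.0], V0=1.0))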

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-28, October 2010
Updated version UTEP-CS-10-28a, December 2010
Computing the Range of Variance-to-Mean Ratio under Interval and Fuzzy Uncertainty
Sio-Long Lo and Gang Xiang

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In many practical problems such as radar imaging, it is useful to compute the variance-to-mean ratio. This ratio is important because, for the sum of k identical independent signal components, both the variance and the mean are multiplied by k, so the ratio is independent of k and thus provides useful information about the components. In practice, we only know the sample values with uncertainty. It is therefore necessary to compute the variance-to-mean ratio under this uncertainty. In this paper, we present efficient algorithms for computing this ratio under interval and fuzzy uncertainty.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-27, October 2010
Updated version UTEP-CS-10-27a, December 2010
Towards Chemical Applications of Dempster-Shafer-Type Approach: Case of Variant Ligands
Jaime Nava

Published in Proceedings of the 30th Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2011, El Paso, Texas, March 18-20, 2011.

In many practical situations, molecules can be obtained from a "template" molecule like benzene by replacing some of its hydrogen atoms with ligands (other atoms or atom groups). There can be many possible replacements of this type. To avoid time-consuming testing of all possible replacements, it is desirable to test some of the replacements and then extrapolate to others -- so that only the promising molecules, for which the extrapolated values are desirable, will have to be synthesized and tested.

For this extrapolation, D. J. Klein and co-authors proposed to use a Dempster-Shafer-type poset extrapolation technique developed by G.-C. Rota from MIT. One of the limitations of this approach is that this technique has been originally proposed on a heuristic basis, with no convincing justification of its applicability to chemical (or other) problems. In our previous paper, we showed that for the case when all the ligands are of the same type, the poset technique is actually equivalent to a more familiar (and much more justified) Taylor series extrapolation. In this paper, we show that this equivalence can be extended to the case when we have variant ligands.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-26, September 2010
A Tutorial on Functional Program Verification
Yoonsik Cheon and Melisa Vela

This document gives a quick tutorial introduction to functional program verification. In functional program verification, a program is viewed as a mathematical function from one program state to another, and proving its correctness is essentially comparing two mathematical functions, the function computed by the program and the specification of the program, called an intended function. The reader is assumed to have some programming experience and to be familiar with such mathematical concepts as sets and functions.

File in PDF and in Compressed Postscript


Technical Report UTEP-CS-10-25, September 2010
Computing Standard-Deviation-to-Mean and Variance-to-Mean Ratios under Interval Uncertainty Is NP-Hard
Sio-Long Lo

Published in Journal of Uncertain Systems, 2015, Vol. 9, No. 2, pp. 124-132.

Once we have a collection of values x1, ..., xn corresponding to a class of objects, a usual way to decide whether a new object with the value x of the corresponding property belongs to this class is to check whether the value x belongs to the interval [E - k0 * s, E + k0 * s], where E = (1/n) * (x1 + ... + xn) is the sample mean, V = s^2 = (1/n) * ((x1 - E)^2 + ... + (xn - E)^2) is the sample variance, and the parameter k0 is determined by the degree of confidence with which we want to make the decision. For each value x, the degree of confidence that x belongs to the class depends on the smallest value k for which x belongs to the interval [E - k * s, E + k * s], i.e., on the ratio r = 1/k = s/|x - E|. In practice, we often only know the intervals [xi] that contain the actual values xi. Different values xi from these intervals lead, in general, to different values of r, so it is desirable to compute the range [r] of the corresponding values of r. Polynomial-time algorithms are known for computing [r] under certain conditions; whether [r] can always be computed in polynomial time remained unknown. In this paper, we prove that the problem of computing [r] is NP-hard. A similar NP-hardness result is proven for a similar ratio V/E that is used in clustering.
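
A crisp-data illustration of the quantities defined above (the interval-valued case is the subject of the paper; the sample numbers are arbitrary):

    import numpy as np

    def membership_ratio(sample, x):
        # E, s, and r = s/|x - E| for crisp data (the paper studies the
        # interval-valued version of this computation)
        sample = np.asarray(sample, float)
        E = sample.mean()
        s = sample.std()        # 1/n definition, as in the abstract
        k = abs(x - E) / s      # smallest k with x in [E - k*s, E + k*s]
        return E, s, 1.0 / k

    print(membership_ratio([2.0, 4.0, 6.0, 8.0], x=9.0))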

File in pdf


Technical Report UTEP-CS-10-24, September 2010
Visualization to Support the Discovery of Prosodic Contours Related to Turn-Taking
Nigel Ward and Joshua L. McCartney

Some meaningful prosodic patterns can be usefully represented with pitch contours; however, the development of such descriptions is a labor-intensive process. To assist in the discovery of contours, visualization tools may be helpful. Edlund et al. (2009) presented the idea of superimposing hundreds of pitch curves from a corpus as a way to see the overall patterns. In this paper we refine and extend this method and illustrate its utility in the discovery of a prosodic cue to back-channels in Chinese. We also discuss issues in relating a contour-based description to one in terms of a conjunction of features, and illustrate this with a prosodic cue to back-channels in Spanish.

File in pdf


Technical Report UTEP-CS-10-23, August 2010
Functional Specification and Verification of Object-Oriented Programs
Yoonsik Cheon

One weakness of Hoare-style verification techniques based on first-order predicate logic is that reasoning is backward from postconditions to preconditions. A natural, forward reasoning is possible by viewing a program as a mathematical function that maps one program state to another. This functional program verification technique requires a minimal mathematical background as it uses equational reasoning based on sets and functions. Thus, it can be easily taught and used in practice. In this paper, we formalize a functional program specification and verification technique and extend it for object-oriented programs. Our approach allows one to formally specify and verify the behavior of an object-oriented program in a way that is natural and closer to the way one reasons about it informally.

File in PDF and in Compressed Postscript


Technical Report UTEP-CS-10-22, August 2010
Studies in the Use of Time Into Utterance as a Predictive Feature for Language Modeling
Nigel Ward and Alejandro Vega

In (Ward and Vega 2008) we examined how word probabilities vary with time into utterance, and proposed a method for using this information to improve a language model. In this report we examine some ancillary issues in the modeling and exploitation of these regularities.

File in pdf


Technical Report UTEP-CS-10-21, August 2010
Towards an Efficient Bisection of Ellipsoids
Paden Portillo, Martine Ceberio, and Vladik Kreinovich

Preliminary version published in Proceedings of the ITEA Live-Virtual-Constructive Conference "Test and Evaluation", El Paso, Texas, January 24-27, 2011; final version published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making, Springer Verlag, Berlin, Heidelberg, 2014, pp. 137-142.

Constraints are often represented as ellipsoids. One of the main advantages of such constraints is that, in contrast to boxes, over which optimization of even quadratic functions is NP-hard, optimization of a quadratic function over an ellipsoid is feasible. Sometimes, the area described by the constraints is too large, so it is reasonable to bisect this area (one or several times) and solve the optimization problem for all the sub-areas. Bisecting a box, we still get a box, but bisecting an ellipsoid, we do not get an ellipsoid. Usually, this problem is solved by enclosing the half-ellipsoid in a larger ellipsoid, but this slows down the domain reduction process. Instead, we propose to optimize the objective functions over the resulting half-, quarter-, etc., ellipsoids.

File in pdf


Technical Report UTEP-CS-10-20, August 2010
Constraint-Related Reinterpretation of Fundamental Physical Equations Can Serve as a Built-In Regularization
Vladik Kreinovich, Juan Ferret, and Martine Ceberio

Published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making, Springer Verlag, Berlin, Heidelberg, 2014, pp. 91-96.

Many traditional physical problems are known to be ill-defined: a tiny change in the initial condition can lead to drastic changes in the resulting solutions. To solve this problem, practitioners regularize these problems, i.e., impose explicit constraints on possible solutions (e.g., constraints on the squares of gradients). Applying the Lagrange multiplier technique to the corresponding constrained optimization problems is equivalent to adding terms proportional to squares of gradients to the corresponding optimized functionals. It turns out that many optimized functionals of fundamental physics already have such squares-of-gradients terms. We therefore propose to re-interpret these equations -- by claiming that they come not, as it is usually assumed, from unconstrained optimization, but rather from a constrained optimization, with squares-of-gradients constraints. With this re-interpretation, the physical equations remain the same -- but now we have a built-in regularization; we do not need to worry about ill-defined solutions anymore.

File in pdf


Technical Report UTEP-CS-10-19, August 2010
Why Ellipsoid Constraints, Ellipsoid Clusters, and Riemannian Space-Time: Dvoretzky's Theorem Revisited
Karen Villaverde, Olga Kosheleva, and Martine Ceberio

Published in: Martine Ceberio and Vladik Kreinovich (eds.), Constraint Programming and Decision Making, Springer Verlag, Berlin, Heidelberg, 2014, pp. 203-207.

In many practical applications, we encounter ellipsoid constraints, ellipsoid-shaped clusters, etc. A usual justification for this ellipsoid shape comes from the fact that many real-life quantities are normally distributed, and for a multi-variate normal distribution, a natural confidence set (containing the vast majority of the objects) is an ellipsoid. However, ellipsoids appear more frequently than normal distributions (which occur in about half of the cases). In this paper, we provide a new justification for ellipsoids based on a known mathematical result -- Dvoretzky's Theorem.

File in pdf


Technical Report UTEP-CS-10-18, August 2010
Cantor's Paradise Regained: Constructive Mathematics from Brouwer to Kolmogorov to Gelfond
Vladik Kreinovich

Published in: Marcello Balduccini and Tran Cao Son (eds.), Logic Programming, Knowledge Representation, and Nonmonotonic Reasoning: Essays in Honor of Michael Gelfond, 2010, pp. 163-172; also in: Springer Lecture Notes in Artificial Intelligence, 2011, Vol. 6565, pp. 181-190.

Constructive mathematics, mathematics in which the existence of an object means that we can actually construct this object, started as a heavily restricted version of mathematics, a version in which many commonly used mathematical techniques (like the Law of Excluded Middle) were forbidden to maintain constructivity. Eventually, it turned out not only that constructive mathematics is not a weakened version of the classical one -- as it was originally perceived -- but that, vice versa, classical mathematics can be viewed as a particular (thus, weaker) case of the constructive one. Crucial results in this direction were obtained by M. Gelfond in the 1970s. In this paper, we mention the history of these results, and show how these results affected constructive mathematics, how they led to new algorithms, and how they affected the current activity in logic programming-related research.

File in pdf


Technical Report UTEP-CS-10-17, July 2010
Revised version UTEP-CS-10-17a, September 2010
From Interval and Probabilistic Granules to Granules of Higher Order
Vladik Kreinovich

Published in: Witold Pedrycz and Shyi-Ming Chen (eds.), Granular Computing and Intelligent Systems: Design with Information Granules of Higher Order and Higher Type, Springer-Verlag, Berlin, Heidelberg, 2011, pp. 1-16.

In this chapter, we provide a natural motivation for granules of higher order, and we show that these granules provide a unified description of different uncertainty formalisms such as random sets, Dempster-Shafer approach, fuzzy sets, imprecise probabilities, and Bayesian statistics. We also prove that for fuzzy uncertainty, granules of second order are sufficient.

Original file in pdf
Revised version in pdf


Technical Report UTEP-CS-10-16, June 2010
FemProf: Advancing Females to the Professoriate in Computing
Nestor J. Rodriguez, Richard A. Alo, Sarah Hug, Sangeeta Gad, Nayda Santiago, and Gladys O. Ducodray

Considering both the percentage of science and engineering PhDs awarded to women in the twenty year period 1974-2004 and the data from the National Science 2006 Survey of Earned Doctorates 1974-2004, the U.S. National Academies noted that women in 2004 have attained equality with men in their representation in the Social Sciences and Life Sciences but are still lagging in Physical and Computing Sciences and Engineering. In the top 50 engineering departments in the U.S., women earn one-fourth of the PhD’s granted in Chemical Engineering and 15% in engineering overall (Handelsman et al, 2005). Although women constitute about half of the total workforce in the U.S. and receive half of the degrees in certain scientific fields, they number only one-fifth of the nation’s scientific and technical workers (US-NA, 2007). The University of Puerto Rico at Mayaguez and the Center for Computational Science at the University of Houston Downtown have embarked on a major effort to change these statistics. This report summarizes the results of the NSF BPC-DP Demonstration project "Paving the Road to Professorship for Female Students" (CNS 0739213) (now known as FemProf) during the period of February, 2008 to June, 2010. This program between the two institutions stems from collaborative efforts with the NSF BPC Computing Alliance for Hispanic Serving Institutions (CAHSI) (CNS 0540592, CNS- 0837556). FemProf is a comprehensive program engaging female undergraduates in a mentoring program that involves a focus on: research experiences, cultural, gender and workplace biases/issues workshops and seminars combined with summer internships - all focused on preparing for success in graduate school and directed to future participation in the professoriate. This report summarizes the main results of the project.

File in pdf


Technical Report UTEP-CS-10-15, June 2010
Updated version Report UTEP-CS-10-15c, June 2014
Toward Computing an Optimal Trajectory for an Environment-Oriented Unmanned Aerial Vehicle (UAV) under Uncertainty
Jerald Brady, Octavio Lerma, Vladik Kreinovich, and Craig Tweedie

Published in Journal of Uncertain Systems, 2015, Vol. 9, No. 2, pp. 84-94.

Over the past decade, a small but increasing number of researchers have begun using Unmanned Aerial Vehicles (UAVs) to expand and improve upon existing remote sensing capabilities in the Arctic. Due to the limited flight time, it is important to make sure that the UAV follows an optimal trajectory -- one in which it covers all the points of a given area with the smallest possible trajectory length. Under the usual assumptions that we cover a rectangular area and that each on-board sensor covers all the points within a given radius r, we describe the optimal trajectory. A more complex optimal trajectory is also developed for situations in which we need to get a more spatially detailed picture of some sub-regions of interest (in which we should have a smaller value of r) and it is sufficient to get a less detailed picture (with a larger r) in other sub-regions. We also describe the best ways to cover the trajectory in situations in which a UAV missed a spot -- due to excess wind or to inexact control.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-14, June 2010
Revised version UTEP-CS-10-14b, May 2011
How to Define a Confidence Set for Functions: A New Justification of the Area Method
Vladik Kreinovich, Gang Xiang, and Michael Oberguggenberger

Published in International Journal of General Systems, 2011, Vol. 40, No. 7, pp. 727-739.

Due to uncertainty, in many problems, we only know the probability of different values. In such situations, we need to make decisions based on these probabilities: e.g., we must tell the user which values are possible and which are not. Often -- e.g., for a normal distribution -- the probability density is everywhere positive, so, theoretically, all real values are possible. In practice, it is usually safe to assume that values whose probability is very small are not possible. For a single variable, this idea is described by a confidence interval C, the interval for which the probability to be outside is smaller than a given threshold p0. In this way, if we know that a variable x is normally distributed with mean a and standard deviation s, we can conclude that x is within the interval C=[a-k*s,a+k*s], where k depends on p0 (usually, k=2, 3, or 6).

When a random object is a function f(x), we similarly want to find a confidence set C of functions, i.e., the set for which the probability to be outside is smaller than p0. To find such a set, it is possible to use the following area method: define the area I(f) under the graph of f (i.e., in mathematical terms, an integral), select a confidence interval for I(f) and take, as C, the set of all the functions f(x) for which I(f) is within this interval.

At present, the area method is largely heuristic, with no justification explaining why exactly the integral functional I(f) corresponding to the area should be used. In our paper, we provide a justification for the area method.

Original version in pdf
Revised version in pdf


Technical Report UTEP-CS-10-13, June 2010
How to Describe Spatial Resolution: An Approach Similar to the Central Limit Theorem
Omar Ochoa, Martine Ceberio, and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2010, Vol. 4, No. 63, pp. 3153-3160.

For spatially distributed quantities v(x), there are two main reasons why the measured value is different from the actual value. First, the sensors are imprecise, so the measured value is slightly different from the actual one. Second, sensors have a finite spatial resolution: they do not simply measure the value at a single point, they are "blurred", i.e., affected by the values of the nearby points as well. It is known that uncertainty can often be described by the Gaussian distribution. This possibility comes from the Central Limit Theorem, according to which the sum of many independent small measurement errors has an approximately Gaussian distribution. In this paper, we show how a similar technique can be applied to spatial resolution: a combination of several independent small blurrings can be described by a Gaussian blurring function.
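
A quick numerical check of this claim (illustrative only): repeatedly convolving several small, independent blurring kernels yields a combined kernel that is close to a Gaussian, in direct analogy with the Central Limit Theorem.

import numpy as np

rng = np.random.default_rng(1)

# illustration: start from a point source and compose several small random blurrings
kernel = np.array([1.0])
for _ in range(12):
    small_blur = rng.random(3)
    small_blur /= small_blur.sum()           # each small blur is a probability kernel
    kernel = np.convolve(kernel, small_blur)

# compare the combined kernel with the Gaussian of the same mean and variance
grid = np.arange(kernel.size)
mean = np.sum(grid * kernel)
var = np.sum((grid - mean) ** 2 * kernel)
gauss = np.exp(-(grid - mean) ** 2 / (2 * var))
gauss /= gauss.sum()

print("max |combined blur - Gaussian| =", np.abs(kernel - gauss).max())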

File in pdf


Technical Report UTEP-CS-10-12, June 2010
Spatial Resolution for Processing Seismic Data: Type-2 Methods for Finding the Relevant Granular Structure
Vladik Kreinovich, Jaime Nava, Rodrigo Romero, Julio Olaya, Aaron Velasco, and Kate C. Miller

Published in Proceedings of the IEEE International Conference on Granular Computing GrC'2010, Silicon Valley, USA, August 14-16, 2010.

One of the main methods of determining the Earth's structure is the analysis of seismic data. Based on the seismic data, we produce a 3-D map describing the density at different locations and at different depths. Due to incomplete coverage and measurement uncertainty, this map provides only an approximate description of the actual density distribution. For example, based on the seismic data, it is impossible to distinguish between the densities at two nearby points. In other words, what we actually reconstruct is not a function of 3 variables, but rather values determined on the appropriate spatial granules. Because of this granularity, it is necessary to find the spatial resolution at different locations and at different depths, i.e., to determine the corresponding granules. In this paper, we show how Type-2 methods can help in determining these granules.

File in pdf


Technical Report UTEP-CS-10-11, June 2010
Optimal Prices in the Presence of Discounts: A New Economic Application of Choquet Integrals
Hung T. Nguyen and Vladik Kreinovich

Published in Proceedings of the International Symposium on Innovative Management, Information, and Production IMIP'2010, Hangzhou, China, October 9-11, 2010, pp. 366-370.

In many real-life situations, there are "package deals", when customers who buy two or more different items get a special discount. For example, when planning a trip, a deal including airfare and a hotel costs less than the airfare and the hotel purchased separately; there are often also special deals that combine airfare with car rental, deals that cover all three items plus tickets to local tourist attractions, etc.

Usually, there are several such deals, with different combinations of items and different discounts. What if we plan to buy several copies of each item -- e.g., we plan a group tour in which some tourists want to rent a car, some want to visit certain tourist attractions, etc.? What is the best way to use all the available discounts? We show that, under reasonable assumptions, the optimal price is provided by the Choquet integral.
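
For readers unfamiliar with the Choquet integral, the following sketch computes the standard discrete Choquet integral of a quantity vector with respect to a set function v; here v(S) is read as the per-unit price of the package S, and all items, quantities, and prices are hypothetical (they are not taken from the paper).

# Discrete Choquet integral of x with respect to a capacity (set function) v.
def choquet_integral(x, v):
    """x: dict item -> quantity; v: dict frozenset of items -> per-unit package price."""
    items = sorted(x, key=x.get)                 # sort items by required quantity
    total, prev = 0.0, 0.0
    for i, item in enumerate(items):
        remaining = frozenset(items[i:])         # items still needed at this quantity level
        total += (x[item] - prev) * v[remaining]
        prev = x[item]
    return total

# hypothetical group tour: numbers of airfares, hotel rooms, and car rentals needed
x = {"air": 5, "hotel": 4, "car": 2}
v = {frozenset(s): price for s, price in [
    (("air", "hotel", "car"), 500),              # full package, cheapest per unit
    (("air", "hotel"), 450),
    (("air",), 300)]}
print("total price:", choquet_integral(x, v))    # 2*500 + 2*450 + 1*300 = 2200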

File in pdf


Technical Report UTEP-CS-10-10, June 2010
Minimum Description Length (MDL) Principle as a Possible Approach to Arc Detection
Jan Beck, David Nemir, and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2010, Vol. 4, No. 63, pp. 3143-3152.

Detecting arcing faults is an important but difficult-to-solve practical problem. In this paper, we show how the Minimum Description Length (MDL) Principle can help in solving this problem.

File in pdf


Technical Report UTEP-CS-10-09, May 2010
Revised version UTEP-CS-10-09b, November 2010
Efficient Algorithms for Heavy-Tail Analysis under Interval Uncertainty
Vladik Kreinovich, Monchaya Chiangpradit, and Wararit Panichkitkosolkul

Published in Annals of Operations Research, 2012, Vol. 195, No. 1, pp. 73-96.

Most applications of statistics to science and engineering are based on the assumption that the corresponding random variables are normally distributed, i.e., distributed according to the Gaussian law, in which the probability density function d(x) decreases exponentially with x: d(x) ~ exp(-k * x^2). Normal distributions indeed frequently occur in practice. However, there are also many practical situations, including situations from mathematical finance, in which we encounter heavy-tailed distributions, i.e., distributions in which d(x) decreases as d(x) ~ x^(-a). To properly take this uncertainty into account when making decisions, it is necessary to estimate the parameters of such distributions based on the sample data x1, ..., xn -- and thus, to predict the size and the probabilities of large deviations. The most well-known statistical estimates for such distributions are the Hill estimator H for the parameter a and the Weissman estimator W for the corresponding quantiles.

These estimators are based on the simplifying assumption that the sample values xi are known exactly. In practice, we often know the values xi only approximately -- e.g., we know the estimates Xi and we know the upper bounds Di on the estimation errors. In this case, the only information that we have about the actual (unknown) value xi is that xi belongs to the interval [xi] = [Xi - Di, Xi + Di]. Different combinations of values xi from [xi] lead, in general, to different values of H and W. It is therefore desirable to find the ranges [H] and [W] of possible values of H and W. In this paper, we describe efficient algorithms for computing these ranges.
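
For reference, the classical Hill estimator for exactly known (point-valued) data can be computed as follows; in its standard form it estimates the tail index 1/a. Computing its range under interval uncertainty, which is the subject of the paper, requires the paper's dedicated algorithms and is not reproduced here; the sample below is simulated.

import numpy as np

def hill_estimator(sample, k):
    """Hill estimator of the tail index 1/a, based on the k largest observations."""
    xs = np.sort(np.asarray(sample, dtype=float))   # ascending order statistics
    top = xs[-k:]                                   # the k largest values
    threshold = xs[-k - 1]                          # the (k+1)-th largest value
    return np.mean(np.log(top / threshold))

# sanity check on simulated Pareto data with a = 2, so that 1/a = 0.5
rng = np.random.default_rng(42)
pareto_sample = (1.0 / (1.0 - rng.random(100_000))) ** 0.5
print("Hill estimate of 1/a:", round(hill_estimator(pareto_sample, k=2000), 3))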

Original file in pdf
revised version in pdf


Technical Report UTEP-CS-10-08, March 2010
A New Eclipse-Based JML Compiler Built Using AST Merging
Amritam Sarcar and Yoonsik Cheon

The Java Modeling Language (JML) is a formal interface specification language for documenting the behavior of Java program modules and has been used in many research and industrial projects. However, its inability to support Java 5 features such as generics is significantly reducing its user base. Moreover, the JML compiler is, on average, 8.5 times slower than the javac Java compiler. In this paper, we present a new JML compiler built on the Eclipse Java compiler to support Java 5 features. We used a technique called AST merging to implement coarse-grained incremental compilation. In our experiments, we observed a significant improvement in compilation speed; the new compiler is 3 to 4.5 times faster than the current one.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-10-07, March 2010
Access Control Contracts for Java Program Modules
Carlos Rubio and Yoonsik Cheon

Application-level security has become an issue in recent years; for example, errors, discrepancies, and omissions in the specification of access control constraints of security-sensitive software components are recognized as an important source of security vulnerabilities. We propose to formally specify the access control assumptions or constraints of a program module and to enforce them at run time. We call such specifications access control contracts. To realize access control contracts, we extended the JML language, a formal interface specification language for Java, and developed a prototype support tool that translates access control contracts into runtime checks. Access control contracts reduce the risk that a security-sensitive module will be exploited to compromise the overall security of a software system. They also facilitate practicing the principle of "security by design" by providing both a practical programming tool and a foundation for formally reasoning about security properties of program modules.
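
The actual tool extends JML and targets Java; purely as a conceptual analogy (in Python, with hypothetical names), the sketch below shows the general idea of declaring an access control contract on a method and enforcing it with a runtime check.

import functools

# conceptual analogy only: the actual work extends JML for Java
def requires_role(*allowed_roles):
    """Declare which caller roles may invoke the decorated method (the 'contract')."""
    def decorator(method):
        @functools.wraps(method)
        def checked(self, caller, *args, **kwargs):
            if caller.role not in allowed_roles:     # runtime check of the contract
                raise PermissionError(f"{caller.role!r} may not call {method.__name__}")
            return method(self, caller, *args, **kwargs)
        return checked
    return decorator

class Caller:
    def __init__(self, role):
        self.role = role

class Account:
    @requires_role("admin", "teller")
    def withdraw(self, caller, amount):
        return f"withdrew {amount}"

print(Account().withdraw(Caller("teller"), 100))     # allowed
# Account().withdraw(Caller("guest"), 100)           # would raise PermissionError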

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-10-06, February 2010
Updated version UTEP-CS-10-06a, May 2010
Extending Maximum Entropy Techniques to Entropy Constraints
Gang Xiang and Vladik Kreinovich

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2010, Toronto, Canada, July 12-14, 2010, pp. 367-373.

In many practical situations, we have only partial information about the probabilities. In some cases, we have crisp (interval) bounds on the probabilities and/or on the related statistical characteristics. In other situations, we have fuzzy bounds, i.e., different interval bounds with different degrees of certainty.

In a situation with uncertainty, we do not know the exact value of the desired characteristic. In such situations, it is desirable to find its worst possible value, its best possible value, and its "typical" value -- corresponding to the "most probable" probability distribution. Usually, as such a "typical" distribution, we select the one with the largest value of the entropy. This works perfectly well in the usual cases, when the information about the distribution consists of the values of moments and other characteristics. For example, if we only know the first and the second moments, then the distribution with the largest entropy is the normal (Gaussian) one.

However, in some situations, we know the entropy (= amount of information) of the distribution. In this case, the maximum entropy approach does not work, since all the distributions which are consistent with our knowledge have the exact same entropy value. In this paper, we show how the main ideas of the maximum entropy approach can be extended to this case.
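
For completeness, here is the standard maximum-entropy derivation referred to above, written out in LaTeX notation: fixing the first two moments yields the Gaussian distribution, whereas fixing the entropy itself leaves every feasible distribution with the same value of S, so the same argument selects nothing.

% standard argument: fixed mean and second moment => Gaussian maximizes entropy
\begin{align*}
 &\text{maximize } S[\rho] = -\int \rho(x)\,\ln\rho(x)\,dx
  \quad\text{subject to}\quad
  \int \rho\,dx = 1,\quad \int x\,\rho\,dx = \mu,\quad \int x^2\,\rho\,dx = m_2;\\
 &L = -\int \rho\ln\rho\,dx + \lambda_0\int \rho\,dx
      + \lambda_1\int x\,\rho\,dx + \lambda_2\int x^2\,\rho\,dx;\\
 &\frac{\delta L}{\delta\rho(x)} = -\ln\rho(x) - 1 + \lambda_0 + \lambda_1 x + \lambda_2 x^2 = 0
  \;\Longrightarrow\;
  \rho(x) = \exp\bigl(\lambda_0 - 1 + \lambda_1 x + \lambda_2 x^2\bigr),
\end{align*}
i.e., a Gaussian density (with $\lambda_2 < 0$ for normalizability), whose parameters are determined by the constraints.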

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-05, February 2010
Updated version UTEP-CS-10-05a, May 2010
How to Relate Fuzzy and OWA Estimates
Tanja Magoc and Vladik Kreinovich

Published in Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2010, Toronto, Canada, July 12-14, 2010, pp. 361-366.

In many practical situations, we have several estimates x1, ..., xn of the same quantity x, i.e., estimates for which x1 ~ x, x2 ~ x, ..., and xn ~ x. It is desirable to combine (fuse) these estimates into a single estimate for x. From the fuzzy viewpoint, a natural way to combine these estimates is: (1) to describe, for each x and for each i, the degree mu(xi - x) to which x is close to xi, (2) to use a t-norm ("and"-operation) to combine these degrees into a degree to which x is consistent with all n estimates, and then (3) to find the estimate x for which this degree is the largest. Alternatively, we can use the computationally simpler OWA (Ordered Weighted Average) approach to combine the estimates xi. To get better fusion, we must appropriately select the membership function mu(x), the t-norm (in the fuzzy case), and the weights (in the OWA case).

Since both approaches -- when applied properly -- lead to reasonable data fusion, it is desirable to be able to relate the corresponding selections. For example, once we have found the appropriate mu(x) and t-norm, we should be able to deduce the appropriate weights -- and vice versa. In this paper, we describe such a relation. It is worth mentioning that while from the application viewpoint, both fuzzy and OWA estimates are not statistical, our mathematical justification of the relation between them uses results that have been previously applied to mathematical statistics.
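
As a toy illustration of the two fusion schemes (with a Gaussian membership function, the product t-norm, and equal OWA weights, all of which are illustrative choices rather than the paper's recommendations):

import numpy as np

estimates = np.array([9.8, 10.1, 10.4, 9.9])   # several estimates of the same x

# fuzzy fusion: maximize the product (t-norm) of Gaussian degrees mu(xi - x);
# for equal widths, the maximizer is simply the arithmetic mean
sigma = 0.5                                    # illustrative membership width
grid = np.linspace(estimates.min() - 1.0, estimates.max() + 1.0, 10_001)
degrees = np.prod(np.exp(-(estimates[:, None] - grid) ** 2 / (2 * sigma ** 2)), axis=0)
fuzzy_fused = grid[np.argmax(degrees)]

# OWA fusion: weighted average of the ordered (sorted) estimates
owa_weights = np.array([0.25, 0.25, 0.25, 0.25])
owa_fused = np.dot(np.sort(estimates), owa_weights)

print("fuzzy fusion:", round(fuzzy_fused, 3), " OWA fusion:", round(owa_fused, 3))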

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-04, February 2010
Runtime Constraint Checking Approaches for OCL, A Critical Comparison
Carmen Avila, Amritam Sarcar, Yoonsik Cheon, and Cesar Yeep

There are many benefits of checking design constraints at runtime -- for example, automatic detection of design drift or corrosion. However, there is no comparative analysis of different approaches, although such an analysis could provide a sound basis for determining the appropriateness of one approach over the others. In this paper, we conduct a comparative analysis and evaluation of different constraint checking approaches possible for the Object Constraint Language (OCL). We compare several approaches, including (1) direct translation to implementation languages, (2) use of executable assertion languages, and (3) use of aspect-oriented programming languages. Our comparison includes both quantitative metrics such as runtime performance and qualitative metrics such as maintainability of constraint checking code. We found that there is no universal approach: for production use, the implementation-language-based approaches perform better when memory footprint or runtime speed is important, while for development use, the other approaches are more appealing when maintainability matters more.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-10-03, February 2010
Updated version UTEP-CS-10-03a, April 2010
Towards a Natural Proof of Metrization Theorem for Space-Times
Vladik Kreinovich and Olga Kosheleva

Published in the Proceedings of the IEEE World Congress on Computational Intelligence WCCI'2010, Barcelona, Spain, July 18-23, 2010, pp. 3098-3105.

In the early 1920s, Pavel Urysohn proved his famous lemma (sometimes referred to as "first non-trivial result of point set topology"). Among other applications, this lemma was instrumental in proving that under reasonable conditions, every topological space can be metrized.

A few years before that, in 1919, a complex mathematical theory was experimentally shown to be extremely useful in the description of real-world phenomena: namely, during a solar eclipse, General Relativity theory -- which uses pseudo-Riemannian spaces to describe space-time -- was (spectacularly) experimentally confirmed. Motivated by this success, Urysohn started working on an extension of his lemma and of the metrization theorem to (causality-)ordered topological spaces and the corresponding pseudo-metrics. After Urysohn's early death in 1924, this activity was continued in Russia by his student Vadim Efremovich, Efremovich's student Revolt Pimenov, and Pimenov's students (and also by H. Busemann in the US and by E. Kronheimer and R. Penrose in the UK). By the 1970s, reasonably general space-time versions of Urysohn's lemma and of the metrization theorem had been proven.

However, the proofs of these 1970s results are not natural -- in the sense that they look like clever tricks rather than direct consequences of the definitions. Since one of the main objectives of this activity is to come up with useful applications to physics, we definitely desire more natural versions of these proofs. In this paper, we show that fuzzy logic leads to such natural proofs.

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-02, January 2010
Updated version UTEP-CS-10-02a, April 2010
Towards Improved Trapezoidal Approximation to Intersection (Fusion) of Trapezoidal Fuzzy Numbers: Specific Procedure and General Non-Associativity Theorem
Gang Xiang and Vladik Kreinovich

Published in the Proceedings of the IEEE World Congress on Computational Intelligence WCCI'2010, Barcelona, Spain, July 18-23, 2010, pp. 3120-3125.

In some cases, our uncertainty about a quantity can be described by an interval of its possible values. If we have two or more pieces of interval information about the same quantity, then we can conclude that the actual value belongs to the intersection of these intervals.

In general, we may need a fuzzy number to represent our partial knowledge. A fuzzy number can be viewed as a collection of intervals (alpha-cuts) corresponding to different degrees alpha from [0,1]. In practice, we can only store finitely many alpha-cuts. Usually, we only store the lower and upper alpha-cuts (corresponding to alpha = 0 and alpha = 1) and use linear interpolation -- i.e., use trapezoidal fuzzy numbers. However, the intersection of two trapezoidal fuzzy numbers is, in general, not trapezoidal. One possible approach is to simply take an intersection of lower and upper alpha-cuts, but this approach underestimates the resulting membership function.

In this paper, we propose a more accurate approach that uses the Least Squares Method to provide a better linear approximation to the resulting membership function.

While this method provides a more accurate trapezoidal description of the intersection, it has its own drawbacks: e.g., this approximation method makes the corresponding "knowledge fusion" operation non-associative. We prove, however, that this "drawback" is inevitable: specifically, we prove that a perfect solution is not possible, and that any improved trapezoidal approximation to intersection (fusion) of trapezoidal fuzzy numbers leads to non-associativity.
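
The following sketch illustrates the general idea of a least-squares trapezoidal approximation to the intersection (pointwise minimum) of two trapezoidal fuzzy numbers; the parameterization, the optimizer, and the numbers are illustrative and do not reproduce the exact procedure proposed in the paper.

import numpy as np
from scipy.optimize import least_squares

# illustrative numbers and parameterization, not the paper's exact procedure
def trapezoid(x, a, b, c, d):
    """Membership function of a trapezoidal fuzzy number with knots a <= b <= c <= d."""
    return np.interp(x, [a, b, c, d], [0.0, 1.0, 1.0, 0.0])

x = np.linspace(0.0, 10.0, 1001)
A = trapezoid(x, 1.0, 3.0, 5.0, 7.0)           # first trapezoidal fuzzy number
B = trapezoid(x, 0.0, 5.0, 6.0, 9.0)           # second trapezoidal fuzzy number
intersection = np.minimum(A, B)                # exact intersection: has a kink, not trapezoidal

def residuals(p):
    a, b, c, d = np.sort(p)                    # keep the knots ordered
    return trapezoid(x, a, b, c, d) - intersection

fit = least_squares(residuals, x0=[1.0, 4.0, 5.5, 7.0])
approx = trapezoid(x, *np.sort(fit.x))
print("fitted knots:", np.round(np.sort(fit.x), 3),
      " max error:", round(float(np.abs(approx - intersection).max()), 3))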

Original file in pdf
updated version in pdf


Technical Report UTEP-CS-10-01, January 2010
Runtime Assertion Checking for JML on the Eclipse Platform Using AST Merging
Amritam Sarcar

The Java Modeling Language (JML) is a formal behavioral interface specification language for Java. It is used for the detailed design documentation of Java program modules such as classes and interfaces. JML has been used extensively by many researchers across various projects and has a large and varied spectrum of tool support, extending from runtime assertion checking (RAC) to theorem proving.

Amongst these tools, RAC and ESC/Java have been used as common tools in many research projects. RAC for JML is a tool that checks at runtime for possible violations of specifications. However, tool support has lately had a problem: the ability to keep up with new features being introduced by Java. The inability to support Java 5 features such as generics has been reducing the user base, the feedback, and the impact of JML usage. Also, the JML2 compiler (jmlc) has a very slow compilation speed: on average, it is about nine times slower than a Java compiler such as javac. It is well understood that jmlc does more work than a Java compiler and hence is expected to be slower. The jmlc tool uses a double-round strategy for generating RAC code. The performance can be improved by optimizing compilation passes, in particular, by abandoning the double-round compilation strategy.

In this thesis, I propose to improve compilation speed using a technique known as AST merging, with a potential performance gain over its predecessor. The jmlc tool parses the source files twice: in the first pass, it parses the source file, whereas in the second pass the source file is parsed again along with the generated RAC code. This hurts the performance of the JML compiler, as parsing is one of the most costly tasks in compilation. In this thesis, I show how I solved this problem. I also discuss how the new JML compiler (jml4c) was built on the Eclipse platform. To reduce the effort of maintaining the code base for current and future versions of Java, Eclipse was chosen as the base compiler: the code needed to support new Java language constructs can then be implicitly maintained by the Eclipse team, outside the JML community. Most of the Level 0 and Level 1 features of JML -- the features that are most commonly used in JML specifications -- have been implemented. Almost 3500 new JUnit test cases were created and run, and test cases and sample files from jmlc were incorporated to check the completeness of the implementation. A subset of the DaCapo benchmark was used to test the correctness and performance of the new compiler: almost 1.5 million lines of code were compiled across 350 packages, generating about 5000 class files, which shows that the new compiler can be used for practical and industrial purposes.

I observed that the proposed AST merging technique is, on average, about 1.6 times faster than the double-round strategy of jmlc; overall, jml4c was three times faster than jmlc. I also observed that this speedup increases with the number of lines of source code. Since industrial code, or any code for which JML specifications can be meaningfully used, typically runs to thousands of lines, the proposed technique benefits from this trend. The implementation demonstrated the feasibility of the AST merging technique on the Eclipse platform, including front-end support (lexing, parsing, type checking) and back-end support (RAC generation and code generation). Several new features were also added that were either absent or only partially implemented in jmlc, including support for runtime specifications inside inner classes, annotations for the enhanced for loop, support for labeled statements, and enhanced error messages.

File in pdf