Computer Science Department

Abstracts of 2005 Reports

Technical Report UTEP-CS-05-37b, January 2006

On the Functional Form of Convex Underestimators for Twice Continuously Differentiable Functions

Christodoulos A. Floudas and Vladik Kreinovich

Published in *Optimization Letters*, 2007, Vol. 1, No. 2,
pp. 187-192.

The optimal functional form of convex underestimators for general twice continuously differentiable functions is of major importance in deterministic global optimization. In this paper, we provide new theoretical results that address the classes of optimal functional forms for the convex underestimators. These are derived based on the properties of shift-invariance and sign-invariance.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-37, November 2005

Updated version UTEP-CS-05-37a, January 2006

Towards Optimal Techniques for Solving Global Optimization Problems: Symmetry-Based Approach

Christodoulos A. Floudas and Vladik Kreinovich

Published in: A. Torn
and J. Zilinskas (eds.), *Models and Algorithms for
Global Optimization*, Springer, New York, 2007, pp. 21-42.

Most techniques for solving global optimization problems have parameters that need to be adjusted to the problem or to the class of problems: for example, in gradient methods, we can select different step sizes. When we have a single parameter (or a few parameters) to choose, it is possible to empirically try many values and come up with an (almost) optimal value. Thus, in such situations, we can come up with an optimal version of the corresponding technique.

In other approaches, e.g., in methods like convex underestimators, instead of selecting the value of a single number-valued parameter, we have to select an auxiliary function. It is not practically possible to test all possible functions, so it is not easy to come up with an optimal version of the corresponding technique.

In this paper, we show that in many such situations, natural symmetry requirements enable us either to analytically solve the problem of finding the optimal auxiliary function, or at least reduce this problem to the easier-to-solve problem of finding a few parameters.

Original file in pdf and
in Compressed Postscript

updated file in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-36, November 2005

Specifying and Checking Method Call Sequences of Java Programs

Yoonsik Cheon and Ashaveena Perumandla

In a pre- and postcondition-style specification, it is difficult to specify the allowed sequences of method calls, referred to as protocols. Protocols are essential properties of reusable object-oriented classes and application frameworks, and approaches based on pre- and postconditions, such as design by contract (DBC) and formal behavioral interface specification languages (BISLs), are being accepted as a practical and effective tool for describing precise interfaces of (reusable) program modules. We propose a simple extension to the Java Modeling Language (JML), a BISL for Java, to specify protocol properties in an intuitive and concise manner. The key idea of our approach is to separate protocol properties from functional properties written in pre- and postconditions and to specify them in a regular expression-like notation. The semantics of our extension is formally defined and provides a foundation for implementing runtime checks. Case studies have been performed to show the effectiveness of our approach. We believe that our approach can be adopted for other BISLs.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-35, October 2005

A Fitness Function for Modular Evolutionary Testing of Object-Oriented Programs

Yoonsik Cheon and Myoung Kim

We show that encapsulation of states in object-oriented programs hinders the search for test data using evolutionary testing. In a well-designed object-oriented program the encapsulated or hidden state is accessible only through exported or public methods. As client code is oblivious to the internal state of a server object, no guidance is available to test the client code using evolutionary testing. In particular, it is difficult to determine the fitness or goodness of test data, as it may depend on the hidden internal state. However, evolutionary testing is a promising new approach whose effectiveness has been shown by several researchers. We propose a specification-based fitness function for evolutionary testing of object-oriented programs. Our approach is modular in that the fitness value calculation does not depend on the source code of server classes; thus it still works even if the server implementation is changed or no code is available, which is frequently the case for reusable object-oriented class libraries and frameworks. Our approach works for both black-box and white-box testing.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-34, October 2005

Updated version UTEP-CS-05-34a, February 2006

Images with Uncertainty: Efficient Algorithms for Shift, Rotation, Scaling, and Registration, and their Applications to Geosciences

C. G. Schiek, R. Araiza, J. M. Hurtado, A. A. Velasco, V. Kreinovich, and V. Sinyansky

Published in: Mike Nachtegael, Dietrich Van der Weken, Etienne E.
Kerre, and Wilfried Philips (eds.), *Soft Computing in Image
Processing: Recent Advances*, Springer Verlag, 2007, pp.
35-64.

In geosciences, we often need to combine two or more images of the same area:

- in data fusion, we must combine, e.g., data from satellite images with a radar image;
- in analyzing the effect of an earthquake, we must compare the before and after images, etc.

Compared images are often obtained from slightly different angles, from a slightly different position. Therefore, in order to compare these images, we must register them, i.e., find the shift, rotation, and scaling after which these images match the best, and then apply these transformations to the original images.

There exist efficient algorithms for registration and for the corresponding transformations, but these algorithms are only effective when we know these images with high accuracy. In many real-life situations - e.g., when comparing pre- and post-earthquake images - the accuracy with which we determine these images is of the same order of magnitude as the desired difference between the compared images. In this paper, we describe how we can generalize the existing techniques to the case of low-accuracy images, and how the resulting algorithms can be efficiently applied in geosciences.
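To make the registration idea concrete, the following is a minimal, illustrative sketch (not from the paper) of the simplest sub-problem: estimating an integer shift between two 1D signals by brute-force cross-correlation. The function name `estimate_shift` and the test signals are hypothetical; the actual algorithms in the report handle 2D images, rotation, scaling, and low-accuracy (uncertain) data.

```python
def estimate_shift(a, b, max_shift):
    """Find the integer shift s maximizing the cross-correlation
    sum of a[i] * b[i + s]; brute force over |s| <= max_shift."""
    best_s, best_c = 0, float("-inf")
    for s in range(-max_shift, max_shift + 1):
        c = sum(a[i] * b[i + s]
                for i in range(len(a)) if 0 <= i + s < len(b))
        if c > best_c:
            best_c, best_s = c, s
    return best_s

# b is a copy of a shifted right by 2 samples
a = [0, 0, 1, 2, 1, 0, 0, 0]
b = [0, 0, 0, 0, 1, 2, 1, 0]
print(estimate_shift(a, b, 3))  # 2
```

In practice, such correlations are computed via the FFT for efficiency; the point here is only the "find the transformation that makes the images match best" step.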

Original file in pdf and
in Compressed Postscript

updated file in pdf

Technical Report UTEP-CS-05-33, September 2005

Revised version UTEP-CS-05-33a, October 2006

Entropy Conserving Probability Transforms and the Entailment Principle

Ronald R. Yager and Vladik Kreinovich

Preliminary version published by Iona College as Machine
Intelligence Institute Technical Report MII-2518, September 2005;
revised version MII-2518R published in October 2006; full paper
published in *Fuzzy Sets and Systems*, 2007, Vol. 158, No. 12,
pp. 1397-1405.

Our main result here is the development of a general procedure for transforming an initial probability distribution into a new probability distribution in such a way that the resulting distribution has entropy at least as great as that of the original distribution. A significant aspect of our approach is that it makes use of Zadeh's entailment principle, which is itself a general procedure for going from an initial possibility distribution to a new possibility distribution so that the resulting possibility distribution has an uncertainty at least as great as that of the original.
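As a toy illustration of the target property (this is not the paper's procedure), one simple entropy-nondecreasing transform mixes a distribution with the uniform one: since Shannon entropy is concave and the uniform distribution maximizes it, the mixture's entropy is at least that of the original. The function names below are hypothetical.

```python
import math

def entropy(p):
    # Shannon entropy of a discrete distribution (natural log)
    return -sum(x * math.log(x) for x in p if x > 0)

def mix_with_uniform(p, t):
    # (1-t)*p + t*uniform: still a distribution, and
    # H(mixture) >= H(p) by concavity of entropy
    n = len(p)
    return [(1 - t) * x + t / n for x in p]

p = [0.7, 0.2, 0.1]
q = mix_with_uniform(p, 0.5)
print(entropy(q) >= entropy(p))  # True
```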

Original file in pdf;

updated version in pdf

Technical Report UTEP-CS-05-32, September 2005

Towards a Cross-Platform Microbenchmark Suite for Evaluating Hardware Performance Counter Data

Maria Gabriela Aguilera, Roberto Araiza, Thientam Pham, and Patricia J. Teller

As useful as performance counters are, the meaning of reported aggregate event counts is sometimes questionable. Questions arise due to unanticipated processor behavior, overhead associated with the interface, the granularity of the monitored code, hardware errors, and lack of standards with respect to event definitions. To explore these issues, we are conducting a sequence of studies using carefully crafted microbenchmarks that permit the accurate prediction of event counts and investigation of the differences between hardware-reported and predicted event counts. This paper presents the methodology employed, some of the microbenchmarks developed, and some of the information uncovered to date. The information provided by this work allows application developers to better understand the data provided by hardware performance counters and better utilize it to tune application performance. A goal of this research is to develop a cross-platform microbenchmark suite that can be used by application developers for these purposes. Some of the microbenchmarks in this suite are discussed in the paper.

Technical Report UTEP-CS-05-31, September 2005

Updated version UTEP-CS-05-31a, October 2005

Interval-Based Robust Statistical Techniques for Non-Negative Convex Functions, with Application to Timing Analysis of Computer Chips

Michael Orshansky, Wei-Shen Wang, Martine Ceberio, and Gang Xiang

Published in *Proceedings of the ACM Symposium on Applied
Computing SAC'06*, Dijon, France, April 23-27, 2006,
pp. 1645-1649.

In chip design, one of the main objectives is to decrease the chip's clock cycle. At the design stage, this time is usually estimated by using worst-case (interval) techniques, in which we only use the bounds on the parameters that lead to delays. This analysis does not take into account that the probability of the worst-case values is usually very small; thus, the resulting estimates are over-conservative, leading to unnecessary over-design and under-performance of circuits. If we knew the exact probability distributions of the corresponding parameters, then we could use Monte-Carlo simulations (or the corresponding analytical techniques) to get the desired estimates. In practice, however, we only have partial information about the corresponding distributions, and we want to produce estimates that are valid for all distributions which are consistent with this information.

In this paper, we develop a general technique that allows us, in particular, to provide such estimates for the clock time.

Original file in pdf and
in Compressed Postscript

Updated version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-30, September 2005

Updated version UTEP-CS-05-30a, February 2006

Detecting Outliers under Interval Uncertainty: A New Algorithm Based on Constraint Satisfaction

Evgeny Dantsin, Alexander Wolpert, Martine Ceberio, Gang Xiang, and Vladik Kreinovich

Published in the *Proceedings
of the International Conference on Information Processing and
Management of Uncertainty in Knowledge-Based Systems IPMU'06*,
Paris, France, July 2-7, 2006, pp. 802-809.

In many application areas, it is important to detect outliers. The traditional engineering approach to outlier detection is that we start with some "normal" values x1,...,xn, compute the sample average E, the sample standard deviation sigma, and then mark a value x as an outlier if x is outside the k0-sigma interval [E-k0*sigma,E+k0*sigma] (for some pre-selected parameter k0). In real life, we often have only interval ranges [xi-,xi+] for the normal values x1,...,xn. In this case, we only have intervals of possible values for the bounds L=E-k0*sigma and U=E+k0*sigma. We can therefore identify outliers as values that are outside all k0-sigma intervals, i.e., values which are outside the interval [L-,U+]. In general, the problem of computing L- and U+ is NP-hard; a polynomial-time algorithm is known for the case when the measurements are sufficiently accurate, i.e., when "narrowed" intervals do not intersect with each other. In this paper, we use constraint satisfaction to show that we can efficiently compute L- and U+ under a weaker (and more general) condition that neither of the narrowed intervals is a proper subinterval of another narrowed interval.
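The classical (point-valued) starting point of this abstract can be sketched in a few lines; this is only the traditional k0-sigma test, not the paper's interval algorithm for computing L- and U+. The function name `is_outlier` is hypothetical.

```python
import math

def is_outlier(x, normals, k0=2.0):
    """Classical outlier test: x is an outlier if it lies outside
    [E - k0*sigma, E + k0*sigma] computed from the "normal" values."""
    n = len(normals)
    E = sum(normals) / n                                  # sample average
    sigma = math.sqrt(sum((v - E) ** 2 for v in normals) / n)
    return abs(x - E) > k0 * sigma

print(is_outlier(10, [1, 2, 3, 4, 5]))  # True
print(is_outlier(4, [1, 2, 3, 4, 5]))   # False
```

The paper's contribution is the interval-valued generalization: when each normal value is only known to lie in [xi-, xi+], the bounds L and U themselves become intervals, and computing their extreme endpoints efficiently is the hard part.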

Original file in pdf and
in Compressed Postscript;

Updated version in pdf and
in Compressed Postscript;

Technical Report UTEP-CS-05-29, August 2005

Discrete Conservation of Nonnegativity for Elliptic Problems Solved by the hp-FEM

Pavel Solin, Tomas Vejchodsky, and Roberto Araiza

Published in *Mathematics and Computers in Simulation*, 2007,
Vol. 76, pp. 205-210.

Most results related to discrete nonnegativity conservation principles (DNCP) for elliptic problems are limited to finite differences (FDM) and lowest-order finite element methods (FEM). In this paper we confirm that a straightforward extension to higher-order finite element methods (hp-FEM) in the classical sense is not possible. We formulate a weaker DNCP for the Poisson equation in one spatial dimension and prove it using an interval computing technique. Numerical experiments related to the extension of this result to 2D are presented.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-28, August 2005

Revised version UTEP-CS-05-28a, November 2005

Population Variance under Interval Uncertainty: A New Algorithm

Evgeny Dantsin, Vladik Kreinovich, Alexander Wolpert, and Gang Xiang

Published in *Reliable Computing*, 2006, Vol. 12, No. 4,
pp. 273-280.

In statistical analysis of measurement results, it is often beneficial to compute the range [V] of the population variance V when we only know the intervals [Xi-Di,Xi+Di] of possible values of xi. In general, this problem is NP-hard; a polynomial-time algorithm is known for the case when the measurements are sufficiently accurate, i.e., when |Xi-Xj| >= (Di+Dj)/n for all i =/= j. In this paper, we show that we can efficiently compute [V] under a weaker (and more general) condition |Xi-Xj| >= |Di-Dj|/n.
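For intuition, a brute-force sketch (not the paper's algorithm) can compute the upper endpoint of [V]: since V is a convex quadratic function of (x1,...,xn), its maximum over the box of intervals is attained at a vertex, so enumerating all endpoint combinations works for small n. The exponential cost of this enumeration is exactly what the efficient algorithms avoid; the lower endpoint needs different methods, since the minimum of V can be attained inside the intervals.

```python
from itertools import product

def variance(xs):
    # population variance: mean of squared deviations from the mean
    n = len(xs)
    m = sum(xs) / n
    return sum((x - m) ** 2 for x in xs) / n

def variance_upper(intervals):
    # V is convex, so max over a box is at a vertex:
    # try every combination of interval endpoints (2^n combinations)
    return max(variance(xs) for xs in product(*intervals))

print(variance_upper([(0, 1), (0, 1)]))  # 0.25
```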

Original file in pdf and
in Compressed Postscript

revised version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-27, August 2005

Updated version UTEP-CS-05-27a, October 2005

Decision Making Beyond Arrow's "Impossibility Theorem", with the Analysis of Effects of Collusion and Mutual Attraction

Hung T. Nguyen, Olga Kosheleva and Vladik Kreinovich

Short version published in the *Proceedings of the Sixth International Conference
on Intelligent
Technologies InTech'05*, Phuket Island, Thailand, December 14-16,
2005, pp. 43-52; detailed version published in
*International Journal of Intelligent Systems*, 2009,
Vol. 24, No. 1, pp. 27-47.

In 1951, K. J. Arrow proved that, under certain assumptions, it is impossible to have group decision making rules which satisfy reasonable conditions like symmetry. This Impossibility Theorem is often cited as a proof that reasonable group decision making is impossible.

We start our paper by remarking that Arrow's result only covers the situations when the only information we have about individual preferences is their binary preferences between the alternatives. If we follow the main ideas of modern decision making and game theory and also collect information about the preferences between lotteries (i.e., collect the utility values of different alternatives), then reasonable decision making rules are possible: e.g., Nash's rule in which we select an alternative for which the product of utilities is the largest possible.

We also deal with two related issues: how we can detect individual preferences if all we have is preferences of a subgroup, and how we take into account mutual attraction between participants.

Original file in pdf and
in Compressed Postscript

updated version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-26a, September 2005

Updated version UTEP-CS-05-26b, October 2005

Quantum Versions of K-CSP Algorithms: A First Step Towards Quantum Algorithms for Interval-Related Constraint Satisfaction Problems

Evgeny Dantsin, Alexander Wolpert, and Vladik Kreinovich

Published in *Proceedings of the ACM Symposium on Applied
Computing SAC'06*, Dijon, France, April 23-27, 2006,
pp. 1640-1644.

In data processing, we input the results Xi of measuring easy-to-measure quantities xi and use these results to find estimates Y=f(X1,...,Xn) for difficult-to-measure quantities y which are related to xi by a known relation y=f(x1,...,xn). Due to measurement inaccuracy, the measured values Xi are, in general, different from the (unknown) actual values xi of the measured quantities, hence the result Y of data processing is different from the actual value of the quantity y.

In many practical situations, we only know the bounds Di on the measurement errors dxi = Xi - xi. In such situations, we only know that the actual value xi belongs to the interval [Xi-Di,Xi+Di], and we want to know the range of possible values of y. The corresponding problems of interval computations are NP-hard, so solving these problems may take an unrealistically long time. One way to speed up computations is to use quantum computing, and quantum versions of interval computations algorithms have indeed been developed.
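The basic interval-computation operations behind this setting can be sketched as follows; this is the standard textbook interval arithmetic, shown here only for orientation (the hardness comes from compositions of such operations when variables repeat).

```python
def iadd(a, b):
    # [a-, a+] + [b-, b+] = [a- + b-, a+ + b+]
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):
    # product range: extremes are among the four endpoint products
    ps = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(ps), max(ps))

print(iadd((1, 2), (3, 5)))   # (4, 7)
print(imul((-1, 2), (3, 4)))  # (-4, 8)
```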

In many practical situations, we also know some constraints on the possible values of the directly measured quantities x1,...,xn. In such situations, we must combine interval techniques with constraint satisfaction techniques. It is therefore desirable to extend quantum interval algorithms to such combinations. As a first step towards this combination, in this paper, we consider quantum algorithms for discrete constraint satisfaction problems.

Original file in pdf and
in Compressed Postscript

updated version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-26, August 2005

On Quantum Versions of Record-Breaking Algorithms for SAT

Evgeny Dantsin, Vladik Kreinovich, and Alexander Wolpert

Published in *ACM SIGACT News*, 2005, Vol. 36, No. 4,
pp. 103-108.

It is well known that a straightforward application of Grover's quantum search algorithm enables us to solve SAT in O(2^(n/2)) steps. Ambainis (SIGACT News, 2004) observed that it is possible to use Grover's technique to similarly speed up a sophisticated algorithm for solving 3-SAT. In this note, we show that a similar speed up can be obtained for all major record-breaking algorithms for satisfiability. We also show that if we use Grover's technique only, then we cannot do better than a quadratic speed up.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-25c, April 2007

Towards Interval Techniques for Processing Educational Data

Olga Kosheleva, Vladik Kreinovich, Luc Longpre, Mourat Tchoshanov, and Gang Xiang

Published in *IEEE Proceedings of the 12th
GAMM-IMACS International Symposium on Scientific Computing,
Computer Arithmetic and Validated Numerics*, Duisburg, Germany,
September 26-29, 2006.

There are many papers that experimentally compare the effectiveness of different teaching techniques. Most of these papers use the traditional statistical approach to process the experimental results. The traditional statistical approach is well suited to numerical data, but often what we are processing is intervals (e.g., A means anything from 90 to 100). We show that the use of interval techniques leads to more adequate processing of educational data.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-25, July 2005

Processing Educational Data: From Traditional Statistical Techniques to an Appropriate Combination of Probabilistic, Interval, and Fuzzy Approaches

Olga M. Kosheleva and Martine Ceberio

Published in *Proceedings of the International Conference on Fuzzy Systems,
Neural Networks, and Genetic Algorithms FNG'05*, Tijuana, Mexico,
October 13-14, 2005, pp. 39-48.

There are many papers that experimentally compare the effectiveness of different teaching techniques. Most of these papers use the traditional statistical approach to process the experimental results. The traditional statistical approach is well suited to numerical data, but often what we are processing is either intervals (e.g., A means anything from 90 to 100) or fuzzy-type perceptions, i.e., words from natural language like "understood well" or "understood reasonably well". We show that the use of interval and fuzzy techniques leads to more adequate processing of educational data.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-24, June 2005

Updated version UTEP-CS-05-24a, October 2005

Computing Mean and Variance under Dempster-Shafer Uncertainty: Towards Faster Algorithms

Vladik Kreinovich, Gang Xiang, and Scott Ferson

Published in *International Journal of Approximate Reasoning*,
2006, Vol. 42, pp. 212-227.

In many real-life situations, we only have partial information about the actual probability distribution. For example, under Dempster-Shafer uncertainty, we only know the masses m1,...,mn assigned to different sets S1,...,Sn, but we do not know the distribution within each set Si. Because of this uncertainty, there are many possible probability distributions consistent with our knowledge; different distributions have, in general, different values of standard statistical characteristics such as mean and variance. It is therefore desirable, given a Dempster-Shafer knowledge base, to compute the ranges of possible values of mean E and of variance V.

In their recent paper, A. T. Langewisch and F. F. Choobineh show how to compute these ranges in polynomial time. In particular, they reduce the problem of computing the range for V to the problem of minimizing a convex quadratic function, a problem which can be solved in time O(n^2 log(n)). We show that the corresponding quadratic optimization problem can be actually solved faster, in time O(n log(n)); thus, we can compute the range of V in time O(n log(n)).
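While the variance range requires the optimization described above, the range of the mean E under Dempster-Shafer uncertainty is straightforward when each focal set Si is an interval: the extreme means are obtained by concentrating each mass at the left or right endpoint. A minimal sketch (the function name is hypothetical, and this easy case is only background to the paper's result):

```python
def mean_range(masses, intervals):
    """Bounds on the mean when mass m[i] is assigned to the
    interval intervals[i] = (a, b): put each mass at the left
    endpoint for the lower bound, right endpoint for the upper."""
    lo = sum(m * a for m, (a, b) in zip(masses, intervals))
    hi = sum(m * b for m, (a, b) in zip(masses, intervals))
    return lo, hi

print(mean_range([0.5, 0.5], [(0, 1), (2, 4)]))  # (1.0, 2.5)
```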

Original file in pdf and
in Compressed Postscript;

updated version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-23, June 2005

Some Usability Issues and Research Priorities in Spoken Dialog Applications

Nigel G. Ward, Anais G. Rivera, Karen Ward and David G. Novick

As a priority-setting exercise, we examined interactions between users and a simple spoken dialog system in comparison to interactions with a human operator. Based on analysis of the observed usability differences and their root causes we propose seven priority issues for spoken dialog systems research.

Technical Report UTEP-CS-05-22, June 2005

Consortium of CISE-MII Funded Institutions: Initial Recommendations on Broadening Participation of Hispanics

Ann Q. Gates

Technical Report UTEP-CS-05-21, June 2005

If an Exact Interval Computation Problem is NP-Hard, Then the Approximate Problem is Also NP-Hard: A Meta-Result

Aline B. Loreto, Laira V. Toscani, Leila Robeiro, Dalcidio M. Claudio, Liara S. Leal, Luc Longpre, and Vladik Kreinovich

In interval computations, usually, once we prove that the problem of computing the exact range is NP-hard, it later turns out that the problem of computing this range with a given accuracy is also NP-hard. In this paper, we provide a general explanation for this phenomenon.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-20, June 2005

Kolmogorov Complexity Leads to a Representation Theorem for Idempotent Probabilities (Sigma-Maxitive Measures)

Vladik Kreinovich and Luc Longpre

Published in *ACM SIGACT News*, 2005, Vol. 36,
No. 3, pp. 107-112.

In many application areas, it is important to consider maxitive measures (idempotent probabilities), i.e., mappings m for which m(A U B)=max(m(A),m(B)). In his papers, J. H. Lutz has used Kolmogorov complexity to show that for constructively defined sets A, one maxitive measure - fractal dimension - can be represented as m(A)= sup{f(x): x in A}. We show that a similar representation is possible for an arbitrary maxitive measure.
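On a finite universe, the sup-representation m(A) = sup{f(x): x in A} and its maxitivity m(A U B) = max(m(A), m(B)) can be checked directly; the toy "density" below is purely illustrative (the report's result concerns general sigma-maxitive measures, not this finite case).

```python
def maxitive_from_density(f, A):
    # m(A) = sup {f(x): x in A}; max suffices on a finite set
    return max(f[x] for x in A)

# toy density f and two events A, B
f = {"a": 0.2, "b": 0.9, "c": 0.5}
A, B = {"a"}, {"b", "c"}

# maxitivity: m(A u B) = max(m(A), m(B))
print(maxitive_from_density(f, A | B))  # 0.9
```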

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-19, April 2005

Updated version UTEP-CS-05-19a, November 2005

Updated version UTEP-CS-05-19b, March 2006

Using Expert Knowledge in Solving the Seismic Inverse Problem

Matthew G. Averill, Kate C. Miller, G. Randy Keller, Vladik Kreinovich, Roberto Araiza, and Scott A. Starks

First (shorter) version published in *Proceedings of the 24th
International Conference of the North American Fuzzy Information
Processing Society NAFIPS'2005*, Ann Arbor, Michigan, June 22-25,
2005, pp. 310-314; full paper published in *International Journal
of Approximate Reasoning*, 2007, Vol. 45, No. 3, pp. 564-587.

For many practical applications, it is important to solve the seismic inverse problem, i.e., to measure seismic travel times and reconstruct velocities at different depths from this data. The existing algorithms for solving the seismic inverse problem often take too long and/or produce un-physical results -- because they do not take into account the knowledge of geophysicist experts. In this paper, we analyze how expert knowledge can be used in solving the seismic inverse problem.

Short file in pdf and
in Compressed Postscript

first updated version in pdf and
in Compressed Postscript

second updated version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-18, April 2005

How to Reconstruct the Original Shape of a Radar Signal?

Matthew G. Averill, Gang Xiang, Vladik Kreinovich, G. Randy Keller, Scott A. Starks, Patrick S. Debroux, and James Boehm

Published in *Proceedings of the 24th International Conference of the
North American Fuzzy Information Processing Society NAFIPS'2005*,
Ann Arbor, Michigan, June 22-25, 2005, pp. 717-721.

The shape of the radar signal can provide us with the additional information about the reflecting surface. However, to decrease the noise, radars use filtering, and filtering changes the shapes of the radar signal. It is therefore necessary to reconstruct the original shape of the radar signal.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-17, April 2005

How the Concept of Information as Average Number of "Yes"-"No" Questions (Bits) Can Be Extended to Intervals, P-Boxes, and More General Uncertainty

Vladik Kreinovich, Gang Xiang, and Scott Ferson

Published in *Proceedings of the 24th International Conference of the
North American Fuzzy Information Processing Society NAFIPS'2005*,
Ann Arbor, Michigan, June 22-25, 2005, pp. 80-85.

We explain how the concept of information as average number of "yes"-"no" questions (bits) can be extended to intervals, p-boxes, and more general uncertainty.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-16a, June 2005

Kaluza-Klein 5D Ideas Made Fully Geometric

Scott A. Starks, Olga Kosheleva, and Vladik Kreinovich

Published in *International Journal of Theoretical Physics*,
2006, Vol. 45, No. 3, pp. 589-601.

After the 1916 success of General Relativity, which explained gravity by adding time as a fourth dimension, physicists have been trying to explain other physical fields by adding extra dimensions. In 1921, Kaluza and Klein showed that under certain conditions like cylindricity (dg_{ij}/dx^5=0), the addition of a 5th dimension can explain the electromagnetic field. The problem with this approach is that while the model itself is geometric, conditions like cylindricity are not geometric. This problem was partly solved by Einstein and Bergmann, who proposed, in their 1938 paper, that the 5th dimension is compactified into a small circle S^1, so that in the resulting cylindric 5D space-time R^4 X S^1 the dependence on x^5 is not macroscopically noticeable. We show that if, in all definitions of vectors, tensors, etc., we replace R^4 with R^4 X S^1, then conditions like cylindricity automatically follow - i.e., these conditions become fully geometric.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-16, April 2005

From Fuzzification and Intervalization to Anglification: A New 5D Geometric Formalism for Physics and Data Processing

Scott A. Starks and Vladik Kreinovich

Published in

We show that in understanding foundations of modern physics, with its 10-dimensional (and higher-dimensional) space-time models, it is very helpful to use the main ideas behind fuzzification -- extension of arithmetic operations and elementary functions from exact numbers to fuzzy numbers. The resulting formalism is, from the mathematical viewpoint, somewhat more complex than the traditional fuzzy arithmetic, but it is still much simpler than the quantum field theory -- and thus, it helps to make several important ideas from foundations of modern physics much more intuitively clear.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-15, April 2005

Use of Maxitive (Possibility) Measures in Foundations of Physics and Description of Randomness: Case Study

A. M. Finkelstein, O. Kosheleva, V. Kreinovich, S. A. Starks, H. T. Nguyen

Published in

According to the traditional probability theory, events with a positive but very small probability can occur (although very rarely). For example, from the purely mathematical viewpoint, it is possible that the thermal motion of all the molecules in a coffee cup goes in the same direction, so this cup will start lifting up.

In contrast, physicists believe that events with extremely small probability cannot occur. In this paper, we show that to get a consistent formalization of this belief, we need, in addition to the original probability measure, to also consider a maxitive (possibility) measure.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-14, April 2005

Supporting Documentation for the SPS-Prospec Case Study

Salamah I. Salamah and Ann Q. Gates

In this work, we report on the results of a case study comparing the correctness of Linear Temporal Logic (LTL) formulas generated by the Property Specification Tool Prospec and the Specification Pattern System (SPS). The report includes all the components used in the case study. In addition, this report provides a description of the use of the SPIN model checker to verify the correctness of LTL specifications. In particular, the report provides screenshots of XSPIN (SPIN's graphical interface) and shows how properties (i.e., LTL formulas) can be specified and verified.

Technical Report UTEP-CS-05-13, April 2005

Updated version UTEP-CS-05-13b, February 2006

Ellipsoids and Ellipsoid-Shaped Fuzzy Sets as Natural Multi-Variate Generalization of Intervals and Fuzzy Numbers: How to Elicit them from Users, and How to Use them in Data Processing

Vladik Kreinovich, Jan Beck, and Hung T. Nguyen

Published in *Information Sciences*, 2007, Vol. 177, No. 2, pp.
388-407.

In this paper, we show that ellipsoids are natural multi-variate generalization of intervals and ellipsoid-shaped fuzzy sets are a natural generalization of fuzzy numbers. We explain how to elicit them from users, and how to use them in data processing.

File in pdf and
in Compressed Postscript

updated version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-12, April 2005

Interval Methods: An Introduction

Luke Achenie, Vladik Kreinovich, and Kaj Madsen

Published in: Jack Dongarra, Kaj Madsen, and Jerzy Wasniewski (eds.),
*PARA'04 Workshop on State-of-the-Art in Scientific Computing*,
Springer Lecture Notes in Computer Science,
2006, Vol. 3732, pp. 53-56.

The ongoing development of ever more advanced computers provides the potential for solving increasingly difficult computational problems. However, given the complexity of modern computer architectures, the task of realizing this potential needs careful attention. A main concern of High Performance Computing is the development of software that optimizes the performance of a given computer.

An important characteristic of computer performance in scientific computing is the accuracy of the computation results. Often, we can estimate this accuracy by using traditional statistical techniques. In many practical situations, however, we do not know the probability distributions of the different measurement, estimation, and/or roundoff errors; we only know estimates of the upper bounds on the corresponding errors, i.e., we only know an *interval* of possible values of each such error. We describe the corresponding "interval computation" techniques, and the applications of these techniques to various problems of scientific computing.
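As a minimal illustration of the interval-computation idea described above (an illustrative sketch, not code from the paper), the following Python functions propagate interval bounds through arithmetic operations:

```python
# Minimal sketch of interval arithmetic (illustrative, not from the paper).
# An interval (lo, hi) represents all possible values of an uncertain quantity.

def i_add(a, b):
    # [a1, a2] + [b1, b2] = [a1 + b1, a2 + b2]
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    # Product interval: take the min and max over all endpoint products,
    # since the product is monotone in each argument on each sign region.
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# A measured value 2 with error bound 1, i.e., the interval [1, 3]:
x = (1, 3)
y = (-1, 2)

# Guaranteed enclosure of x * y + x:
result = i_add(i_mul(x, y), x)
```

Each operation returns an interval guaranteed to contain every value the exact result could take, which is precisely the guarantee that statistical error estimates cannot provide when the error distributions are unknown.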

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-11, March 2005

NSF Advance: Institutional Transformation for Faculty Diversity - Faculty Worklife Survey Results

Manuela Romero and Ann Q. Gates

The University of Texas at El Paso (UTEP) received an NSF ADVANCE grant in October 2003 to create an initiative for institutional change, with the goal of serving as a model for other institutions that desire to increase the representation and advancement of women, including underrepresented minorities, in academic science and engineering careers. In the first year of the grant, co-PIs Ann Gates and Patricia Witherspoon worked with the ADVANCE Program Evaluator, Manuela Romero, to create an instrument to survey faculty work life at UTEP. The instrument is based on the "Study of Faculty Work Life" survey instrument developed by the Women in Science and Engineering Leadership Institute at the University of Wisconsin-Madison. The ADVANCE team administered the survey in spring 2004 to full-time faculty in the 18 NSF-designated departments. The survey establishes a baseline of the issues concerning recruitment, retention, and departmental, college, and overall university climate and culture. This report presents the results of the survey.

Technical Report UTEP-CS-05-10, March 2005

A Contextual Interpretation of Undefinedness for Runtime Assertion Checking

Yoonsik Cheon and Gary T. Leavens

Runtime assertion checkers and static checking and verification tools must all cope with the well-known undefinedness problem of logic. This problem is particularly severe for runtime assertion checkers, since, in addition to the possibility of exceptions and errors, runtime assertion checkers must cope with non-executable expressions (such as certain quantified expressions). This paper describes how the runtime assertion checker of the Java Modeling Language (JML) copes with undefinedness. JML is interesting because it attempts to satisfy the needs of a wide range of tools; besides runtime assertion checking, these include static checking tools (like ESC/Java) and static verification tools. These other tools use theorem provers that are based on standard (two-valued) logic and hence use the underspecified total functions semantics for assertions. That semantics validates all the rules of standard logic by substituting an arbitrary value of the appropriate type for each undefined subexpression. JML's runtime assertion checker implements this semantics, and also deals with non-executable expressions, in a way that is both simple and practical. The technique implemented selects a value for undefined subexpressions depending on the context in which the undefinedness occurs. This technique enables JML's runtime assertion checker to be consistent with the other JML tools and to fulfill its role as a practical and effective means of debugging code and specifications.
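To give a rough flavor of the contextual technique described above, here is a small Python sketch (the names and the particular defaults are hypothetical; JML's actual checker is written in Java and its rules are more refined): when a subexpression raises an exception, a value is substituted based on the context in which the undefinedness occurred.

```python
# Sketch of context-dependent handling of undefinedness (illustrative only;
# the defaults chosen here are a hypothetical simplification, not JML's
# exact rules). When evaluating an assertion, a subexpression that raises
# an exception is given a value chosen according to its context.

def eval_in_context(predicate, context):
    """Evaluate a boolean thunk; if it is undefined (raises), substitute
    a context-dependent value instead of propagating the exception."""
    try:
        return predicate()
    except Exception:
        # A clause being checked that cannot be evaluated is not known to
        # hold; conservatively report it as a violation rather than let
        # undefinedness silently mask an error.
        if context in ("precondition", "postcondition"):
            return False
        return True

# Example: "x / y > 0" is undefined when y == 0.
x, y = 1, 0
ok = eval_in_context(lambda: x / y > 0, "precondition")
```

The point of the sketch is only the shape of the mechanism: the substituted value depends on where the undefined subexpression occurs, which is what lets the runtime checker stay consistent with the two-valued semantics used by the other JML tools.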

File in PDF and in Compressed Postscript

Technical Report UTEP-CS-05-09, March 2005

Updated version UTEP-CS-05-09a, July 2005

Revised version UTEP-CS-05-09c, November 2005

Combining Interval, Probabilistic, and Fuzzy Uncertainty: Foundations, Algorithms, Challenges -- An Overview

Vladik Kreinovich, Daniel J. Berleant, Scott Ferson, and Weldon A. Lodwick

Published in *Proceedings of the International Conference on Fuzzy Systems,
Neural Networks, and Genetic Algorithms FNG'05*, Tijuana, Mexico,
October 13-14, 2005, pp. 1-10.

Since the 1960s, many algorithms have been designed to deal with interval uncertainty. In the last decade, there has been a lot of progress in extending these algorithms to the case when we have a combination of interval and probabilistic uncertainty. We provide an overview of related algorithms, results, and remaining open problems.
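One standard example from this area (a known result in the interval-statistics literature, sketched here, not a claim about this particular paper's contents) is computing the range of the sample mean when each data point is only known as an interval:

```python
# When each measurement x_i is only known to lie in [lo_i, hi_i], the set
# of possible values of the sample mean is exactly the interval
# [mean(lo_i), mean(hi_i)], because the mean is monotone in each x_i.

def mean_bounds(intervals):
    n = len(intervals)
    lo = sum(a for a, _ in intervals) / n
    hi = sum(b for _, b in intervals) / n
    return (lo, hi)

data = [(0.9, 1.1), (1.8, 2.2), (2.9, 3.1)]
bounds = mean_bounds(data)  # interval enclosing every possible sample mean
```

The mean is the easy case; for other statistics, such as the sample variance, computing the exact range under interval uncertainty is NP-hard in general, which is one source of the open problems such overviews discuss.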

Original File in pdf and
in Compressed Postscript

Updated version in pdf and
in Compressed Postscript

Revised version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-08, March 2005

Revised version UTEP-CS-05-08a, June 2005

Which Fuzzy Logic is the Best: Pragmatic Approach (and its Theoretical Analysis)

Vladik Kreinovich and Hung T. Nguyen

Published in *Fuzzy Sets and Systems*, 2006, Vol. 157,
pp. 611-614.

In this position paper, we argue that when we are looking for the best fuzzy logic, we should specify in what sense it is the best, and that we get different fuzzy logics as "the best" depending on what optimality criterion we use.

Original file in pdf and
in Compressed Postscript

Revised version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-07, February 2005

On Inverse Halftoning: Computational Complexity and Interval Computations

S. D. Cabrera, K. Iyer, G. Xiang, and V. Kreinovich

Published in *Proceedings of the 39th Conference on Information
Sciences and Systems CISS'2005*,
Johns Hopkins University, March 16-18, 2005, Paper 164.

We analyze the problem of inverse halftoning. This problem is a particular case of a class of difficult-to-solve problems: inverse problems for reconstructing piecewise smooth images. We show that this general problem is NP-hard. We also propose a new idea for solving problems of this type, including the inverse halftoning problem.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-06, March 2005

Supporting Documentation for the 2003 SPS-Prospec Experiment

Oscar Mondragon and Salamah Salamah

In the spring of 2003, an empirical study was conducted to compare the effectiveness of the Prospec tool with the Specification Pattern System (SPS). The objective of the experiment was to determine the effect that Prospec and SPS have on the completeness and correctness of the generated software property specifications. The purpose of this document is to present the material that was used during the experiment, and to document how the classification of the patterns and scopes of each trial was validated.

Technical Report UTEP-CS-05-05, February 2005

A Complete Automation of Unit Testing for Java Programs

Yoonsik Cheon, Myoung Yee Kim, and Ashaveena Perumandla

Program testing is expensive and labor-intensive, often consuming more than half of the total development costs, and yet it is frequently not done well and the results are not always satisfactory. However, testing is the primary method to ensure that programs comply with requirements. We describe our on-going project that attempts to completely automate unit testing of object-oriented programs. Our project investigates the use of an evolutionary approach, called genetic algorithms, for test data generation, and the use of program specifications, written in JML, for test result determination. A proof-of-concept tool has been implemented and shows that complete automation is feasible for unit testing Java programs. Automated testing techniques such as ours can complement manual testing by testing a significant portion of object-oriented programs, as methods in object-oriented programs tend to be small; manual testing can then focus on more interesting problems, e.g., inter-class testing.
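As a minimal sketch of the evolutionary idea (a toy illustration, not the authors' tool, which targets Java and JML), a genetic algorithm can search for inputs that violate a specification, using the specification check itself as the test oracle and fitness signal:

```python
import random

# Toy genetic search for an input that violates a specification-derived
# oracle. Illustrative only: real tools use graded fitness functions
# (e.g., branch distances), not the binary fitness used here.

def buggy_abs(x):
    return x if x > 0 else -x - 1  # seeded bug: wrong for x <= 0

def violates_spec(x):
    # Oracle derived from the specification: result must equal |x|.
    return buggy_abs(x) != abs(x)

def fitness(x):
    return 1.0 if violates_spec(x) else 0.0

def search(generations=100, pop_size=20, seed=0):
    rng = random.Random(seed)
    pop = [rng.randint(-1000, 1000) for _ in range(pop_size)]
    for _ in range(generations):
        for x in pop:
            if violates_spec(x):
                return x  # counterexample found: a failing test input
        # Selection + mutation: keep the fitter half, perturb survivors.
        pop.sort(key=fitness, reverse=True)
        half = pop[: pop_size // 2]
        pop = half + [x + rng.randint(-10, 10) for x in half]
    return None

counterexample = search()
```

The specification plays both roles described in the abstract: it steers the search (fitness) and decides the test result (oracle), so no hand-written expected outputs are needed.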

File in PDF and in Compressed Postscript

Technical Report UTEP-CS-05-04, February 2005

Specifying and Checking Method Call Sequences in JML

Yoonsik Cheon and Ashaveena Perumandla

In a pre- and post-conditions style specification, it is difficult to specify allowed sequences of method calls, often called protocols. However, protocols are essential properties of reusable object-oriented classes and application frameworks, and approaches based on pre- and post-conditions, such as design by contract (DBC) and formal behavioral interface specification languages (BISLs), are being accepted as a practical and effective way of describing precise interfaces of (reusable) program modules. We propose a simple extension to JML, a BISL for Java, to specify protocol properties in an intuitive and concise manner. We also define a formal semantics for our extension and provide runtime checks. We believe that our approach can be easily adopted for other BISLs.
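To illustrate what a call-sequence (protocol) property is, here is a small Python sketch of enforcing one at runtime with a state machine (the paper's actual notation is an extension of JML, a Java specification language; the class and protocol below are made up for illustration):

```python
# Runtime check of a call-sequence protocol, sketched as a state machine.
# Hypothetical protocol for a file-like class: open (read | write)* close.

class ProtocolError(Exception):
    pass

class CheckedFile:
    # Allowed transitions: state -> {method: next_state}.
    _protocol = {
        "init":   {"open": "opened"},
        "opened": {"read": "opened", "write": "opened", "close": "closed"},
        "closed": {},  # no calls allowed after close
    }

    def __init__(self):
        self._state = "init"

    def _step(self, method):
        allowed = self._protocol[self._state]
        if method not in allowed:
            raise ProtocolError(
                f"call to {method}() not allowed in state {self._state}")
        self._state = allowed[method]

    def open(self):  self._step("open")
    def read(self):  self._step("read")
    def write(self): self._step("write")
    def close(self): self._step("close")

f = CheckedFile()
f.open(); f.read(); f.write(); f.close()  # a legal sequence
```

A pre-/post-condition on each method alone cannot express "read only between open and close"; the protocol annotation supplies exactly this missing ordering information, and the runtime check flags any call made in the wrong state.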

File in PDF and in Compressed Postscript

Technical Report UTEP-CS-05-03, January 2005

A Universal Sensor Model

Valery Mazin and Vladik Kreinovich

Published in *Proceedings of the 12th International Conference
Sensor'2005,* Nuremberg, Germany, May 10-12, 2005, pp. 317-322.

File in pdf and in Compressed Postscript

Technical Report UTEP-CS-05-02, January 2005

Final version UTEP-CS-05-02a, September 2005

Optimization and Decision Making under Interval and Fuzzy Uncertainty: Towards New Mathematical Foundations

Hung T. Nguyen and Vladik Kreinovich

Published in: Cengiz Kahraman (ed.), *Fuzzy Applications in Industrial Engineering*, Springer Verlag, Berlin/Heidelberg, 2006, pp. 275-290.

In many industrial engineering problems, we must select a design,
select parameters of a process, or, in general, make a decision.
Informally, this decision must be *optimal*, the best for the
users. In traditional operations research, we assume that we know
the *objective function* f(x) whose values describe the
consequence of a decision x for the user. Optimization of
well-defined functions is what started calculus in the first
place: once we know the objective function f(x), we can use
differentiation to find its maximum, e.g., as the point x at
which the derivative of f with respect to x is equal to 0.

In real life, we often do not know the exact consequence f(x) of each possible decision x; we only know this consequence with uncertainty. The simplest case is when we have tolerance-type (interval) uncertainty, i.e., when all we know is that the deviation between the actual (unknown) value f(x) and the approximate (known) value F(x) cannot exceed the (known) bound D(x). In precise terms, this means that f(x) belongs to the interval [F(x)-D(x),F(x)+D(x)]. In other situations, in addition to the interval that is guaranteed to contain f(x), experts can also provide us with narrower intervals that contain f(x) with a certain degree of confidence alpha; such a nested family of intervals is equivalent to a more traditional definition of a fuzzy set. To solve the corresponding optimization problems, in this paper, we extend differentiation formalisms to the cases of interval and fuzzy uncertainty.
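One immediate consequence of interval uncertainty (a standard observation, sketched here in Python with a made-up objective, not the paper's formalism) is that decisions can only sometimes be compared: x1 is guaranteed better than x2 only when the whole interval for f(x1) lies above the whole interval for f(x2).

```python
# Comparing decisions when the objective is only known with interval
# uncertainty: f(x) is known to lie in [F(x) - D(x), F(x) + D(x)].

def objective_interval(F, D, x):
    return (F(x) - D(x), F(x) + D(x))

def guaranteed_better(F, D, x1, x2):
    """x1 is guaranteed better than x2 iff every possible value of f(x1)
    exceeds every possible value of f(x2)."""
    lo1, _ = objective_interval(F, D, x1)
    _, hi2 = objective_interval(F, D, x2)
    return lo1 > hi2

# Hypothetical approximate objective and accuracy bound:
F = lambda x: -(x - 3) ** 2 + 10   # known approximation of the objective
D = lambda x: 0.5                  # uniform error bound

# f(3) lies in [9.5, 10.5], f(0) in [0.5, 1.5]: 3 is guaranteed better.
better = guaranteed_better(F, D, 3, 0)
# f(3) and f(2.9) have overlapping intervals: neither is guaranteed better.
undecided = guaranteed_better(F, D, 3, 2.9)
```

This is why interval and fuzzy uncertainty call for extended mathematical foundations: the usual "set the derivative to zero" recipe presumes the values f(x) can be totally ordered, and under uncertainty they cannot.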

Original file in pdf and
in Compressed Postscript;

updated version in pdf and
in Compressed Postscript

Technical Report UTEP-CS-05-01, January 2005

Revised version UTEP-CS-05-01a, February 2009

Egyptian Fractions Revisited

Olga Kosheleva and Vladik Kreinovich

Published in *Informatics in Education*, 2009, Vol. 8, No. 1, pp.
35-48.

It is well known that the ancient Egyptians represented each fraction as a sum of unit fractions -- i.e., fractions with unit numerators; this is how they, e.g., divided loaves of bread. What is not clear is why they used this representation. In this paper, we propose a new explanation: crudely speaking, the main idea behind Egyptian fractions provides an optimal way of dividing the loaves. We also analyze the related properties of fractions.
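The classical greedy (Fibonacci-Sylvester) method, which repeatedly takes the largest usable unit fraction, is one concrete way to produce such representations (it is a standard textbook algorithm, not necessarily the construction analyzed in the paper):

```python
from fractions import Fraction

def egyptian(p, q):
    """Greedy decomposition of p/q (with 0 < p/q < 1) into distinct unit
    fractions: repeatedly subtract the largest unit fraction 1/ceil(q/p).
    Returns the list of denominators."""
    r = Fraction(p, q)
    denominators = []
    while r > 0:
        n = -(-r.denominator // r.numerator)  # ceil(q/p), integer-exact
        denominators.append(n)
        r -= Fraction(1, n)
    return denominators

# 5/6 = 1/2 + 1/3: the kind of split useful for dividing 5 loaves among 6.
parts = egyptian(5, 6)
```

The greedy step always strictly decreases the numerator of the remainder, so the loop terminates and the resulting unit fractions are distinct.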

Original file in pdf and
in Compressed Postscript;

updated version in pdf and
in Compressed Postscript