University of Texas at El Paso
Computer Science Department
Abstracts of 2003 Reports


Technical Report UTEP-CS-03-29, November 2003
Minimality of Solution Update in Conflict Resolution: An Application of Revision Programming to von Neumann-Morgenstern Approach
Inna Pivkina and Vladik Kreinovich

Published in International Journal of Intelligent Systems, 2005, Vol. 20, No. 9, pp. 939-956.

In a 1944 book that started game theory (and the mathematical approach to conflict resolution), von Neumann and Morgenstern proposed the notion of a solution. When the situation changes, the old solution is often no longer a solution, so it needs to be updated. In practical applications, it is usually desirable to keep the solution change "minimal" in some reasonable sense. We show that for a seemingly straightforward formalization of this minimality, checking whether a change is minimal is NP-hard. We also show that by representing the notion of a solution as a collection of revision rules, we can produce a reasonable notion of minimality for which there exists a feasible algorithm for checking the minimality of an update.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-03-28, October 2003
Updated version UTEP-CS-03-28a, July 2004
Beyond Convex? Global Optimization is Feasible Only for Convex Objective Functions: A Theorem
R. Baker Kearfott and Vladik Kreinovich

Published in Journal of Global Optimization, 2005, Vol. 33, No. 4, pp. 617-624.

It is known that there are feasible algorithms for minimizing convex functions and that, for general functions, global minimization is a difficult (NP-hard) problem. It is therefore reasonable to ask whether there exists a class of functions larger than the class of all convex functions for which we can still solve the corresponding minimization problems feasibly. In this paper, we prove, in essence, that no such more general class exists: global optimization is feasible only for convex objective functions.

File in Compressed PostScript and pdf
Updated file in Compressed PostScript and pdf


Technical Report UTEP-CS-03-27, October 2003
Fast Multiplication of Interval Matrices (Interval Version of Strassen's Algorithm)
Martine Ceberio and Vladik Kreinovich

Published in Reliable Computing, 2004, Vol. 10, No. 3, pp. 241-243.

Strassen's algorithm multiplies two numerical matrices fast but, when applied to interval matrices, leads to excess width. We use Rump's interval arithmetic to propose an interval version of Strassen's algorithm whose only excess width is in second-order terms.
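
For reference, the core of the classical (numerical) Strassen construction multiplies 2x2 matrices -- or 2x2 block matrices -- with seven multiplications instead of eight. The following Python sketch shows only this standard textbook step; the interval version based on Rump's arithmetic is the subject of the report itself.

    def strassen_2x2(A, B):
        """Classical Strassen step: multiply 2x2 matrices (or 2x2 block
        matrices) using 7 multiplications instead of 8."""
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        return [[m1 + m4 - m5 + m7, m3 + m5],
                [m2 + m4, m1 - m2 + m3 + m6]]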

File in Compressed PostScript and pdf


Technical Report UTEP-CS-03-26, October 2003
Greedy Algorithms for Optimizing Multivariate Horner Schemes
Martine Ceberio and Vladik Kreinovich

Published in: ACM SIGSAM Bulletin, 2004, Vol. 38, No. 1 (147), pp. 8-15.

For a univariate polynomial f(x1), the Horner scheme provides the fastest way to compute its value. For multivariate polynomials, several different versions of the Horner scheme are possible, and it is not clear which of them is optimal. In this paper, we propose a greedy algorithm that will hopefully lead to good computation times.

A univariate Horner scheme has another advantage: if the value x1 is known with uncertainty, and we are interested in the resulting uncertainty in f(x1), then the Horner scheme leads to a better estimate for this uncertainty than many other ways of computing f(x1). The second greedy algorithm that we propose tries to find the multivariate Horner scheme that leads to the best estimate for the uncertainty in f(x1,...,xn).
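
To illustrate what a multivariate Horner scheme looks like, here is a Python sketch that evaluates a polynomial (given as a map from exponent tuples to coefficients) by repeatedly factoring out one variable. The variable-selection rule used here -- pick the variable occurring in the most monomials -- is a hypothetical stand-in, not the greedy criterion proposed in the report.

    def horner_eval(poly, values):
        """poly: dict mapping exponent tuples (e1,...,en) -> coefficient;
        values: point (x1,...,xn).  Horner-like recursive evaluation."""
        if not poly:
            return 0.0
        if all(all(e == 0 for e in mono) for mono in poly):
            return sum(poly.values())          # constant polynomial
        n = len(values)
        # hypothetical greedy rule: factor out the most frequent variable
        counts = [sum(1 for mono in poly if mono[i] > 0) for i in range(n)]
        i = max(range(n), key=lambda k: counts[k])
        # split poly = x_i * q + r, where r collects monomials without x_i
        q, r = {}, {}
        for mono, c in poly.items():
            if mono[i] > 0:
                m2 = mono[:i] + (mono[i] - 1,) + mono[i+1:]
                q[m2] = q.get(m2, 0.0) + c
            else:
                r[mono] = r.get(mono, 0.0) + c
        return values[i] * horner_eval(q, values) + horner_eval(r, values)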

File in Compressed PostScript and pdf


Technical Report UTEP-CS-03-25, August 2003
Updated version UTEP-CS-03-25a, October 2003
Sensitivity Analysis of Neural Control
Chin-Wang Tao, Hung T. Nguyen, J. T. Yao, and Vladik Kreinovich

Published in Proceedings of the International Conference on Information Technology InTech'03, Chiang Mai, Thailand, December 17-19, 2003, pp. 478-482.

We provide explicit formulas that describe how sensitive the resulting signal of a neural network is to the measurement errors with which we measure the inputs.
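
The report's formulas are specific to the neural controllers studied there; as a generic illustration of the idea, the following Python sketch bounds, to first order, how much the output of a one-hidden-layer sigmoid network can change when each input x[i] is off by at most dx[i].

    import math

    def sensitivity_bound(W1, b1, w2, x, dx):
        """First-order bound on the output change of a one-hidden-layer
        sigmoid network y = w2 . s(W1 x + b1) when each input x[i] is off
        by at most dx[i].  A generic linearization, not the report's
        exact formulas."""
        s = lambda t: 1.0 / (1.0 + math.exp(-t))
        h = [s(sum(W1[j][i] * x[i] for i in range(len(x))) + b1[j])
             for j in range(len(b1))]
        bound = 0.0
        for i in range(len(x)):
            dy_dxi = sum(w2[j] * h[j] * (1 - h[j]) * W1[j][i]
                         for j in range(len(b1)))
            bound += abs(dy_dxi) * dx[i]
        return bound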

Original file in Compressed PostScript and pdf
Updated file in Compressed PostScript and pdf


Version UTEP-CS-03-24b, January 2004
Final version UTEP-CS-03-24d, May 2004
On-Line Algorithms for Computing Mean and Variance of Interval Data, and Their Use in Intelligent Systems
Berlin Wu, Hung T. Nguyen, and Vladik Kreinovich

Published in Information Sciences, 2007, Vol. 177, No. 16, pp. 3228-3238.

Original file in Compressed PostScript and pdf
Updated file in Compressed PostScript and pdf


Technical Report UTEP-CS-03-24, August 2003
Updated version UTEP-CS-03-24a, October 2003
Real-Time Algorithms for Statistical Analysis of Interval Data
Berlin Wu, Hung T. Nguyen, and Vladik Kreinovich

Published in Proceedings of the International Conference on Information Technology InTech'03, Chiang Mai, Thailand, December 17-19, 2003, pp. 483-490.

When we have only interval ranges [xi] of the sample values x1,...,xn, what is the interval [V] of possible values for the variance V of these values? There are quadratic-time algorithms for computing the exact lower bound V- on the variance of interval data, and for computing V+ under reasonable, easily verifiable conditions. The problem is that in real life, we often make additional measurements. In traditional statistics, if we have a new measurement result, we can modify the value of the variance in constant time. In contrast, previously known algorithms for processing interval data required that, once a new data point is added, we start from the very beginning. In this paper, we describe new algorithms for statistical processing of interval data, algorithms in which adding a data point requires only O(n) computational steps.
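
For contrast, here is the constant-time update from traditional statistics mentioned above, for exact (non-interval) data: a minimal Python sketch of the standard Welford-style recurrence, not of the report's interval algorithms.

    def add_point(n, mean, M2, x_new):
        """Welford-style constant-time update of the sample mean and of
        M2 = sum of squared deviations when a new measurement arrives."""
        n += 1
        delta = x_new - mean
        mean += delta / n
        M2 += delta * (x_new - mean)
        return n, mean, M2   # variance so far: M2 / n (population version)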

Original file in Compressed PostScript and pdf
Updated file in Compressed PostScript and pdf


Technical Report UTEP-CS-03-23, August 2003
Updated version UTEP-CS-03-23a, October 2003
A New Differential Formalism for Interval-Valued Functions and its Potential Use in Detecting 1-D Landscape Features
Vladik Kreinovich, Hung T. Nguyen, Gracaliz Pereira Dimuro, Antonio Carlos da Rocha Costa, and Benjamin Rene Callejas Bedregal

Published in Proceedings of the International Conference on Information Technology InTech'03, Chiang Mai, Thailand, December 17-19, 2003, pp. 491-498.

In many practical problems, it is important to know the slope (derivative) dy/dx of one quantity y with respect to some other quantity x. For example, different 1-D landscape features can be characterized by different values of the derivative dy/dx, where y is an altitude and x is a horizontal coordinate. In practice, we often know the values of y(x) for different x with interval uncertainty. How can we then find the set of possible values of the slope? In this paper, we formulate this problem of differentiating interval-valued functions in precise terms, and we describe an (asymptotically) optimal algorithm for computing the corresponding derivative.
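
As a simple illustration (not the report's optimal algorithm), consider measurements at increasing points x[0] < x[1] < ..., with each altitude y(x[k]) known only to lie in [y_lo[k], y_hi[k]]. The range of possible slopes between two consecutive points follows directly from interval arithmetic:

    def slope_range(x, y_lo, y_hi, i):
        """Interval of possible slopes between points i and i+1 when each
        altitude y(x[k]) is only known to lie in [y_lo[k], y_hi[k]]."""
        dx = x[i + 1] - x[i]               # assumed positive
        return ((y_lo[i + 1] - y_hi[i]) / dx,
                (y_hi[i + 1] - y_lo[i]) / dx)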

Original file in Compressed PostScript and pdf
Updated file in Compressed PostScript and pdf


Technical Report UTEP-CS-03-22c, April 2004
Fast Quantum Algorithms for Handling Probabilistic and Interval Uncertainty
Vladik Kreinovich and Luc Longpre

Published in Mathematical Logic Quarterly, 2004, Vol. 50, No. 4/5, pp. 507-518.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-03-22, July 2003
Computational Complexity and Feasibility of Data Processing and Interval Computations, with Extension to Cases When We Have Partial Information about Probabilities
Vladik Kreinovich and Luc Longpre

Published in: Vasco Brattka, Matthias Schroeder, Klaus Weihrauch, and Ning Zhong (eds.), Proceedings of the Conference on Computability and Complexity in Analysis CCA'2003, Cincinnati, Ohio, USA, August 28-30, 2003, pp. 19-54.

In many real-life situations, we are interested in the value of a physical quantity y that is difficult or impossible to measure directly. To estimate y, we find some easier-to-measure quantities x1,...,xn which are related to y by a known relation y=f(x1,...,xn). Measurements are never 100% accurate; hence, the measured values Xi are different from xi, and the resulting estimate Y=f(X1,...,Xn) is different from the desired value y=f(x1,...,xn). How different?

The traditional engineering approach to error estimation in data processing assumes that we know the probabilities of different measurement errors dxi=Xi-xi. In many practical situations, we only know the upper bound Di for this error; hence, after the measurement, the only information that we have about xi is that it belongs to the interval [xi]=[Xi-Di,Xi+Di]. In this case, it is important to find the range [y] of all possible values of y=f(x1,...,xn) when each xi is in [xi]. We start the paper with a brief overview of the computational complexity of the corresponding interval computation problems.
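
For readers unfamiliar with interval computations, here is a toy Python sketch of the "natural interval extension" idea: replace each arithmetic operation in f by its interval counterpart to get an enclosure of the range [y]. This naive evaluator (with a hypothetical example function) generally overestimates the range, which is exactly why the complexity questions surveyed in the paper arise.

    # Toy interval arithmetic: each input xi lies in [Xi - Di, Xi + Di].
    def i_add(a, b): return (a[0] + b[0], a[1] + b[1])
    def i_sub(a, b): return (a[0] - b[1], a[1] - b[0])
    def i_mul(a, b):
        p = (a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1])
        return (min(p), max(p))

    # Hypothetical example: y = x1*(x2 - x3), measured X = (1.0, 2.0, 0.5),
    # each with error bound D = 0.1:
    xs = [(X - 0.1, X + 0.1) for X in (1.0, 2.0, 0.5)]
    print(i_mul(xs[0], i_sub(xs[1], xs[2])))   # an enclosure of the range [y]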

We then discuss how this computational complexity changes when, in addition to the upper bounds Di, we have some partial information about the probabilities of different values of dxi.

We also show how the use of quantum computing can speed up some computations related to interval and probabilistic uncertainty.

Most of the related problems turn out to be, in general, at least NP-hard. We end the paper with speculations on whether (and how) hypothetical physical devices can solve NP-hard problems faster than in exponential time.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-03-21, July 2003
Updated version UTEP-CS-03-21a, October 2003
Separating Components in Interval-Valued Images
Marilton Sanchotene de Aguiar, Gracaliz Pereira Dimuro, Antonio Carlos da Rocha Costa, Andrei Finkelstein, and Vladik Kreinovich

In many applications of imaging, we would like to know whether we have an image of a single-component object or an image of an object that consists of several components. Many algorithms have been designed to solve this problem; however, these algorithms are all heuristic. Often, according to some reasonable methods, we have a single component, while according to other equally reasonable methods, the same image has multiple components. It is desirable to produce reliable methods, so that if a method claims that there are multiple components, then this should mean that the observed data is incompatible with the assumption that there is only one component. At present, there exist reliable methods for separating components in an (interval-valued) 1D source. In this paper, we develop an efficient algorithm for separating components in a general (interval-valued) 2D source.

Original file in Compressed PostScript and pdf
Updated file in Compressed PostScript and pdf


Technical Report UTEP-CS-03-20, July 2003
Updated version UTEP-CS-03-20a, July 2004
Computing 2-Step Predictions for Interval-Valued Finite Stationary Markov Chains
Marcilia Andrade Campos, Gracaliz Pereira Dimuro, Antonio Carlos da Rocha Costa, and Vladik Kreinovich

Markov chains are a useful tool for solving practical problems. In many real-life situations, we do not know the exact values of the initial and transition probabilities; instead, we only know the intervals of possible values of these probabilities. Such interval-valued Markov chains were considered and analyzed by I. O. Kozine and L. V. Utkin in their Reliable Computing paper, where they propose an efficient algorithm for computing interval-valued probabilities of the future states. For the general case of non-stationary Markov chains, their algorithm leads to the exact intervals for the probabilities of future states.

In the important case of stationary Markov chains, we can still apply their algorithm. For stationary chains, 1-step predictions remain exact, but 2-step predictions sometimes lead to intervals that are wider than the exact ranges. In this paper, we describe a modification of the Kozine-Utkin algorithm that always produces exact 2-step predictions for stationary Markov chains.
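
To see where excess width can come from, here is a naive (non-Kozine-Utkin) interval prediction step in Python: it propagates the interval probabilities by straightforward interval arithmetic and ignores the constraint that the exact probabilities must sum to 1, which is what inflates multi-step predictions.

    def naive_step(p, P):
        """Naive interval prediction: p'_j = sum_i p_i * P[i][j], with all
        quantities given as (lo, hi) intervals of nonnegative numbers.
        Ignores the constraint sum_i p_i = 1, hence the excess width."""
        n = len(p)
        return [(sum(p[i][0] * P[i][j][0] for i in range(n)),
                 sum(p[i][1] * P[i][j][1] for i in range(n)))
                for j in range(n)]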

Original file in Compressed PostScript and pdf;
updated file in Compressed PostScript and pdf


Technical Report UTEP-CS-03-19, June 2003
Updated version UTEP-CS-03-19a, October 2003
Novel Approaches to Numerical Software with Result Verification
Laurent Granvilliers, Vladik Kreinovich, and Norbert Mueller

Published in: Rene Alt, Andreas Frommer, R. Baker Kearfott, and Wolfram Luther (eds.), Numerical Software with Result Verification, International Dagstuhl Seminar, Dagstuhl Castle, Germany, January 19-24, 2003, Revised Papers, Springer Lecture Notes in Computer Science, 2004, Vol. 2991, pp. 274-305.

Traditional design of numerical software with result verification is based on the assumption that we know the algorithm f(x1,...,xn) that transforms the inputs x1,...,xn into the output y=f(x1,...,xn), and that we know the intervals of possible values of the inputs. Many real-life problems go beyond this paradigm. In some cases, we do not have an algorithm f; we only know some relation (constraints) between xi and y. In other cases, in addition to knowing the intervals [xi], we may know some relations between the xi; we may have some information about the probabilities of different values of xi; and we may know the exact values of some of the inputs (e.g., we may know that x1=pi/2).

In this paper, we describe the approaches for solving these real-life problems. In Section 2, we describe interval consistency techniques related to handling constraints; in Section 3, we describe techniques that take probabilistic information into consideration, and in Section 4, we overview techniques for processing exact real numbers.
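
As a taste of the interval consistency techniques of Section 2, the following Python sketch performs one round of hull-consistency narrowing for a single constraint x + y = z, with each variable given as an interval (lo, hi). This one-constraint toy is our illustration, not code from the paper.

    def narrow_sum(x, y, z):
        """One round of hull-consistency narrowing for x + y = z; each
        interval is tightened using the other two.  Repeating to a fixed
        point yields hull consistency for this constraint."""
        z = (max(z[0], x[0] + y[0]), min(z[1], x[1] + y[1]))
        x = (max(x[0], z[0] - y[1]), min(x[1], z[1] - y[0]))
        y = (max(y[0], z[0] - x[1]), min(y[1], z[1] - x[0]))
        return x, y, z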

Original file in Compressed PostScript and pdf
Updated file in Compressed PostScript and pdf


Technical Report UTEP-CS-03-18, June 2003
Determination of Properties of Composite Materials from the Lamb Wave Propagation: Probabilistic, Interval, and Fuzzy Approaches
Fariba Ansari, William Durrer, Soheil Nazarian, and Vladik Kreinovich

Published in: Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'03, Chicago, Illinois, July 24-26, 2003, pp. 420-425.

One of the most efficient non-destructive techniques for finding hidden faults in a plate is the use of ultrasonic Lamb waves. These waves are reflected by faults, and from this reflection, we can locate the faults. For that, we need to know how the Lamb waves propagate. Their propagation is determined by the dynamic elastic constants C_pq, so we must find these constants. These constants cannot be measured directly; instead, we measure the dependence c(f) of the wave speed on the frequency, and we must reconstruct C_pq from the measured values of c(f). In this paper, we show how this can be done in the presence of probabilistic, interval, and fuzzy uncertainty.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-03-17, May 2003
The Use of Fuzzy Measures as a Data Fusion Tool in Geographic Information Systems: Case Study
Cynthia Campos, G. Randy Keller, Vladik Kreinovich, Luc Longpre, Francois Modave, Scott A. Starks, and Roberto Torres

Published in: Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'03, Chicago, Illinois, July 24-26, 2003, pp. 365-370.

Geospatial databases generally consist of measurements related to points (or pixels in the case of raster data), lines, and polygons. In recent years, the size and complexity of these databases have increased significantly and they often contain duplicate records, i.e., two or more close records representing the same measurement result. In this paper, we use fuzzy measures to address the problem of detecting duplicates in a database consisting of point measurements. As a test case, we use a database of measurements of anomalies in the Earth's gravity field that we have compiled. We show that a natural duplicate deletion algorithm requires (in the worst case) quadratic time, and we propose a new asymptotically optimal O(n*log(n)) algorithm. These algorithms have been successfully applied to gravity databases. We believe that they will prove to be useful when dealing with many other types of point data.
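
One standard way to beat the quadratic all-pairs scan is grid hashing: snap each point to a cell of size eps and compare only points in neighboring cells. The Python sketch below shows this generic technique; it is our illustration and not necessarily the asymptotically optimal algorithm of the paper.

    from collections import defaultdict

    def find_duplicates(points, eps):
        """Report pairs of 2-D points closer than eps, using grid hashing
        so the expected cost is near-linear instead of quadratic."""
        grid = defaultdict(list)
        for k, (x, y) in enumerate(points):
            grid[(int(x // eps), int(y // eps))].append(k)
        dups = []
        for k, (x, y) in enumerate(points):
            cx, cy = int(x // eps), int(y // eps)
            for gx in (cx - 1, cx, cx + 1):
                for gy in (cy - 1, cy, cy + 1):
                    for m in grid.get((gx, gy), []):
                        if m > k:
                            x2, y2 = points[m]
                            if (x - x2) ** 2 + (y - y2) ** 2 < eps ** 2:
                                dups.append((k, m))
        return dups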

File in Compressed PostScript and pdf


Technical Report UTEP-CS-03-16, May 2003
Fast Quantum Algorithms for Handling Probabilistic, Interval, and Fuzzy Uncertainty
Mark Martinez, Luc Longpre, Vladik Kreinovich, Scott A. Starks, and Hung T. Nguyen

Published in: Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'03, Chicago, Illinois, July 24-26, 2003, pp. 395-400.

We show how quantum computing can speed up computations related to processing probabilistic, interval, and fuzzy uncertainty.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-03-15, May 2003
Dimension Compactification -- A Possible Explanation for Superclusters and for Empirical Evidence Usually Interpreted as Dark Matter
Vladik Kreinovich

According to modern quantum physics, at the microlevel, the dimension of space-time is at least 11; we only observe 4 dimensions because the others are compactified: the size along each of the other dimensions is much smaller than the macroscale. There is no universally accepted explanation of why exactly 4 dimensions remain uncompactified. A natural suggestion is: maybe there is no fundamental reason why exactly 4 dimensions should remain; maybe, when we go to even larger scales, some of the 4 dimensions will be compactified as well? In this paper, we explore the consequences of this compactification suggestion, and we show that -- on the qualitative level -- these consequences have actually already been observed: as superclusters and as the evidence that is usually interpreted as justifying dark matter. Thus, we get a new possible explanation of both superclusters and the dark matter evidence -- via dimension compactification.

Original file in Compressed PostScript and pdf


Technical Report UTEP-CS-03-14, May 2003
Updated version UTEP-CS-03-14b, July 2004
Computing Higher Central Moments for Interval Data
Vladik Kreinovich, Luc Longpre, Scott Ferson, and Lev Ginzburg

Higher central moments are very useful in statistical analysis: the third moment M3 characterizes asymmetry of the corresponding probability distribution, the fourth moment M4 describes the size of the distribution's tails, etc. When we know the exact values x1,...,xn, we can use the known formulas for computing the corresponding sample central moments. In many practical situations, however, we only know intervals [x1],...,[xn] of possible values of xi; in such situations, we want to know the range of possible values of Mm. In this paper, we propose algorithms that compute such ranges.
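
For exact data, the "known formulas" mentioned above are straightforward; a minimal Python sketch (the report's algorithms bound this quantity when each value is only known up to an interval):

    def central_moment(xs, m):
        """Sample central moment M_m = (1/n) * sum (x_k - E)^m for exact
        data x1,...,xn; e.g., m=3 measures asymmetry, m=4 the tails."""
        n = len(xs)
        E = sum(xs) / n
        return sum((x - E) ** m for x in xs) / n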

File in Compressed PostScript and pdf;
Revised version in Compressed PostScript and pdf


Technical Report UTEP-CS-03-13, April 2003
Detecting Cracks in Thin Plates by Using Lamb Wave Scanning: Geometric Approach
Roberto Osegueda, Vladik Kreinovich, Enrique Roldan, and Rodrigo Mares

Published in Geombinatorics, 2004, Vol. 14, No. 2, pp. 62-71.

A crack in a thin plate reflects ultrasonic waves; therefore, it is reasonable to determine the location of the crack by measuring the reflected waves. The problem of locating the crack can be reformulated in purely geometric terms. Previously, time-consuming iterative numerical methods were used to solve the resulting geometric problem. In this paper, we show that explicit (and fast to compute) formulas can be used instead.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-03-12, March 2003
Updated version UTEP-CS-03-12a, April 2003
Probabilities, Intervals, What Next? Optimization Problems Related to Extension of Interval Computations to Situations with Partial Information about Probabilities
V. Kreinovich

Published in Journal of Global Optimization, 2004, Vol. 29, No. 3, pp. 265-280.

When we have only interval ranges [xi-,xi+] of the sample values x1,...,xn, what is the interval [V-,V+] of possible values for the variance V of these values? We prove that the problem of computing the upper bound V+ is NP-hard. We provide a feasible (quadratic-time) algorithm for computing the exact lower bound V- on the variance of interval data. We also provide feasible algorithms that compute V+ under reasonable, easily verifiable conditions; in particular, in the case when interval uncertainty is introduced to maintain privacy in a statistical database.
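
Since the sample variance is a convex function of (x1,...,xn), its maximum over the box [x1-,x1+] x ... x [xn-,xn+] is attained at a vertex; enumerating all 2^n vertices therefore gives the exact V+, but in exponential time, consistent with the NP-hardness result above. A brute-force Python sketch (the feasible special-case algorithms are in the paper):

    from itertools import product

    def variance_upper_bound(lo, hi):
        """Exact V+ by checking every vertex (x_k = lo[k] or hi[k]) of the
        box; correct because the variance is convex, but exponential."""
        n = len(lo)
        best = 0.0
        for x in product(*zip(lo, hi)):
            E = sum(x) / n
            best = max(best, sum((v - E) ** 2 for v in x) / n)
        return best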

We also extend the main formulas of interval arithmetic for different arithmetic operations x1 op x2 to the case when, for each input xi, in addition to the interval [xi]=[xi-,xi+] of possible values, we also know its mean Ei (or an interval [Ei] of possible values of the mean). In this case, we are interested not only in the bounds for y = x1 op x2, but also in the bounds for the mean of y. We formulate and solve the corresponding optimization problems, and describe the remaining open problems.

Original file in Compressed PostScript and pdf
Updated file in Compressed PostScript and pdf


Technical Report UTEP-CS-03-11b, May 2003
Updated version UTEP-CS-03-11c, May 2003
Robust Methodology for Characterizing System Response to Damage: A Subjective (Fuzzy) Partial Ordered Modification of the Traditional Utility-Probability Scheme
Carlos de la Mora, Piotr Wojciechowski, Vladik Kreinovich, Scott A. Starks, Paul J. Tanenbaum, and Alexandr V. Kuzminykh

Published in: Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'03, Chicago, Illinois, July 24-26, 2003, pp. 413-419.

Original file UTEP-CS-03-11b in Compressed PostScript and pdf
updated version UTEP-CS-03-11c in Compressed PostScript and pdf


Technical Report UTEP-CS-03-11, March 2003
Updated version UTEP-CS-03-11a, May 2003
Robust Methodology for Characterizing System Response to Damage: Approach Based on Partial Order
Paul J. Tanenbaum, Carlos de la Mora, Piotr Wojciechowski, Olga Kosheleva, Vladik Kreinovich, Scott A. Starks, and Alexandr V. Kuzminykh

Published in: Ivan Lirkov, Svetozar Margenov, Jerzy Wasniewski, and Plamen Yalamov (eds.), Large-Scale Scientific Computing, Proceedings of the 4-th International Conference LSSC'2003, Sozopol, Bulgaria, June 4-8, 2003, Springer Lecture Notes in Computer Science, 2004, Vol. 2907, pp. 238-245.

To describe the response of complex engineering systems to various damage mechanisms, engineers have traditionally used number-valued utilities to describe the results of different possible outcomes, and (number-valued) probabilities (often, subjective probabilities) to describe the relative frequency of different outcomes. This description is based on the assumption that experts can always state a definite preference between two possible outcomes, i.e., that the set of all outcomes is linearly (totally) ordered. In practice, experts often cannot make such a choice; their preferences form only a partial order.

Original file UTEP-CS-03-11 in Compressed PostScript and pdf
updated version UTEP-CS-03-11a in Compressed PostScript and pdf


Technical Report UTEP-CS-03-10d, May 2003
Outlier Detection under Interval and Fuzzy Uncertainty: Algorithmic Solvability and Computational Complexity
Vladik Kreinovich, Praveen Patangay, Luc Longpre, Scott A. Starks, Cynthia Campos, Scott Ferson, and Lev Ginzburg

Published in: Proceedings of the Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'03, Chicago, Illinois, July 24-26, 2003, pp. 410-406.

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-03-10, March 2003
Updated version UTEP-CS-03-10e, June 2003
Final version UTEP-CS-03-10f, July 2004
Outlier Detection under Interval Uncertainty: Algorithmic Solvability and Computational Complexity
Vladik Kreinovich, Luc Longpre, Praveen Patangay, Scott Ferson, and Lev Ginzburg

Short version published in: Ivan Lirkov, Svetozar Margenov, Jerzy Wasniewski, and Plamen Yalamov (eds.), Large-Scale Scientific Computing, Proceedings of the 4-th International Conference LSSC'2003, Sozopol, Bulgaria, June 4-8, 2003, Springer Lecture Notes in Computer Science, 2004, Vol. 2907, pp. 276-283; full paper published in Reliable Computing, 2005, Vol. 11, No. 1, pp. 59-76.

In many application areas, it is important to detect outliers. The traditional engineering approach to outlier detection is that we start with some "normal" values x1,...,xn, compute the sample average E and the sample standard deviation sigma, and then mark a value x as an outlier if x is outside the k0-sigma interval [E-k0*sigma,E+k0*sigma] (for some pre-selected parameter k0). In real life, we often have only interval ranges [xi] for the normal values x1,...,xn. In this case, we only have intervals of possible values for the bounds E-k0*sigma and E+k0*sigma. We can therefore identify outliers as values that are outside all k0-sigma intervals. In this paper, we analyze the computational complexity of these outlier detection problems, and provide efficient algorithms that solve some of these problems (under reasonable conditions).
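
For reference, the traditional (non-interval) test described above looks as follows in Python; the paper's algorithms handle the case when the normal values are intervals [xi] rather than numbers:

    def is_outlier(x, normals, k0=2.0):
        """Classical k0-sigma test: flag x if it lies outside
        [E - k0*sigma, E + k0*sigma] for the 'normal' sample."""
        n = len(normals)
        E = sum(normals) / n
        sigma = (sum((v - E) ** 2 for v in normals) / n) ** 0.5
        return x < E - k0 * sigma or x > E + k0 * sigma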

Original file UTEP-CS-03-10a in Compressed PostScript and pdf;
updated version UTEP-CS-03-10e in Compressed PostScript and pdf;
final version UTEP-CS-03-10f in Compressed PostScript and pdf;
short version UTEP-CS-03-10b in Compressed PostScript and pdf.


Technical Report UTEP-CS-03-09, February 2003
Detection of Cracks at Rivet Holes in Thin Plates Using Lamb-Wave Scanning
Roberto Osegueda, Vladik Kreinovich, Soheil Nazarian and Enrique Roldan

Published in: Tribikram Kundu (ed.), Smart Nondestructive Evaluation and Health Monitoring of Structural and Biological Systems II, Proceedings of the SPIE/International Society for Optical Engineering, Vol. 5047, San Diego, CA, March 3-5, 2003, pp. 55-66.

This paper describes a Lamb-wave scanning method for the detection of notches simulating cracks at rivet holes in thin plates. The approach requires the generation of an ultrasonic S0-mode Lamb wave using an incident transmitter excited with a tone burst centered at a nearly non-dispersive frequency. Area scans of a plate with a notched hole are performed to generate time-series information, which is used to create animations illustrating the wave propagation characteristics. The time series are subjected to a sifting process to obtain intrinsic mode functions, which contain narrow frequency-banded information of the signals. The Hilbert-Huang transform is applied to the intrinsic mode functions, which permits the computation of the signal energy as a function of time, proportional to the square of the amplitude of the analytical signal. Animations of the propagation of the Lamb-wave energy clearly illustrate that a potential scanning approach is to acquire time series along a line between the transmitter and the hole, capturing wave scattering from the hole and reflections from the notches. The times of flight and amplitudes of the notch-reflected energy are used to calculate the coordinates of the source of the reflections by a geometric approach. The series of identified coordinates outlines the extent of the notch at the rivet hole. Results of experiments conducted on thin square plates with a single hole with notches of various sizes compare favorably with the actual notches.

File in pdf.


Technical Report UTEP-CS-03-08, February 2003
Statistical and Dempster-Shafer Techniques in Testing Structural Integrity of Aerospace Structures
Roberto A. Osegueda, Seetharami R. Seelam, Bharat Mulupuru, and Vladik Kreinovich

Published in: Tribikram Kundu (ed.), Smart Nondestructive Evaluation and Health Monitoring of Structural and Biological Systems II, Proceedings of the SPIE/International Society for Optical Engineering, Vol. 5047, San Diego, CA, March 3-5, 2003, pp. 140-141.

Several techniques are known for non-destructive testing of aerospace structures, such as pulse echo, Eddy current, magnetic resonance, etc. Each of these techniques detects some faults but misses others, so it is desirable to combine (fuse) the results of these techniques. Several methods of data fusion are known. To improve the quality of fault detection, we modified the straightforward statistical method as follows: (1) we computed the mean and variance iteratively, excluding detected faults from the computation on the next iteration; (2) we treated the plate's edge and the inside separately; (3) we dismissed measurements in which only one technique detects a fault as possibly erroneous. The resulting method indeed leads to much better fault detection.
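
As an illustration of step (1) only, here is a bare-bones Python sketch of the iterative mean-and-variance computation with exclusion of already-detected faults; the threshold parameter k and the stopping rule are hypothetical details, not taken from the report.

    def iterative_faults(z, k=2.0, max_iter=10):
        """Recompute mean/variance iteratively, excluding already-detected
        faults (values beyond k standard deviations) from the next pass."""
        faults = set()
        for _ in range(max_iter):
            ok = [v for i, v in enumerate(z) if i not in faults]
            if not ok:
                break
            E = sum(ok) / len(ok)
            s = (sum((v - E) ** 2 for v in ok) / len(ok)) ** 0.5
            new = {i for i, v in enumerate(z) if abs(v - E) > k * s}
            if new == faults:      # converged: no change in detected faults
                break
            faults = new
        return faults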

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-03-07, January 2003
Updated version UTEP-CS-03-07a, March 2003
Kolmogorov Complexity: How a Paradigm Motivated by Foundations of Physics Can Be Applied in Robust Control
Vladik Kreinovich and Isaak A. Kunin

To appear in Proceedings of the International Conference "Physics and Control" PhysCon'2003, Saint-Petersburg, Russia, August 20-22, 2003.

Born about three decades ago, Kolmogorov Complexity Theory (KC) led to important discoveries that, in particular, give a new understanding of the fundamental problem of the interrelation between classical continuum mathematics and reality (physics, biology, engineering sciences, ...). Crudely speaking, it enables us to better distinguish between mathematically possible (possibly abnormal) and physically possible situations.

We show that this formalization is not only in good accordance with theoretical physics, but that it can also be applied to robust control: instead of requiring that the control work for all mathematically possible situations, we only require that it work for all "non-abnormal" situations.

Original file in Compressed PostScript and pdf;
updated file in Compressed PostScript and pdf.


Technical Report UTEP-CS-03-06, January 2003
Can Quantum Computers Be Useful When There are Not Yet Enough Qubits?
Luc Longpre and Vladik Kreinovich

Published in Bulletin of the European Association for Theoretical Computer Science (EATCS), 2003, Vol. 79, pp. 164-169.

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-03-05, January 2003
Complexity of Single-Agent and Equilibrium-Based Multiagent Planning and Plan Checking: Approach When We Have a List of Valid States
Chitta Baral and Vladik Kreinovich

In a recent paper, M. Bowling, R. Jensen, and M. Veloso proposed a new formalization of the problem of multiagent planning - a formalization based on the (intuitively natural) notion of an equilibrium. In this paper, we analyze the computational complexity of the corresponding equilibrium-based multiagent planning and plan checking. Within the traditional approach, in which states are described by fluents, the computational complexity of planning under incompleteness is PSPACE-hard already for a single agent; therefore, to make a meaningful comparison between the complexity of single-agent and multi-agent planning, we analyze complexity in a different approach, in which a list of valid states is assumed to be explicitly given.

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-03-04, January 2003
Updated version UTEP-CS-03-04a, October 2003
Interval Approach to Phase Measurements Can Lead to Arbitrarily Complex Sets - A Theorem and Ways Around It
Bharat C. Mulupuru, Vladik Kreinovich, and Roberto Osegueda

Published in Numerical Algorithms, 2004, Vol. 37, pp. 285-299.

We are often interested in phases of complex quantities; e.g., in non-destructive testing of aerospace structures, important information comes from phases of Eddy current and magnetic resonance.

For each measurement, we have an upper bound D on the measurement error dx=X-x, so when the measurement result is X, we know that the actual value x is in [X-D,X+D]. Often, we have no information about probabilities of different values, so this interval is our only information about x. When the accuracy is not sufficient, we perform several repeated measurements, and conclude that x belongs to the intersection of the corresponding intervals.

For real-valued measurements, the intersection of intervals is always an interval. For phase measurements, we prove that an arbitrary closed subset of a circle can be represented as an intersection of intervals.
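
To make the contrast concrete, here is the real-valued case in Python: intersecting the intervals [Xk-Dk, Xk+Dk] from repeated measurements always yields a single interval (or an empty set, signaling inconsistent data). By the theorem above, no such simple representation survives for phase (circular) data.

    def intersect(intervals):
        """Combine repeated real-valued measurements: the actual value lies
        in every [X - D, X + D], hence in their intersection, which is again
        an interval -- or empty, signaling inconsistent measurements."""
        lo = max(a for a, b in intervals)
        hi = min(b for a, b in intervals)
        return (lo, hi) if lo <= hi else None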

Handling such complex sets is difficult. It turns out that if we have some statistical information, then the problem often becomes tractable. As a case study, we describe an algorithm that uses both real-valued and phase measurement results to determine the shape of a fault. This is important: e.g., smooth-shaped faults concentrate less stress and are, thus, less dangerous than irregularly shaped ones.

Original file in Compressed PostScript and pdf;
updated file in Compressed PostScript and pdf.


Technical Report UTEP-CS-03-03, January 2003
Updated version UTEP-CS-03-03b, September 2003
Interval Arithmetic, Affine Arithmetic, Taylor Series Methods: Why, What Next?
Nedialko S. Nedialkov, Vladik Kreinovich, and Scott A. Starks

Published in Numerical Algorithms, 2004, Vol. 37, pp. 325-336.

In interval computations, the range of each intermediate result r is described by an interval [r]. To decrease excess interval width, we can keep some information on how r depends on the input x=(x1,...,xn). There are several successful methods of approximating this dependence; in these methods, the dependence is approximated by linear functions (affine arithmetic) or by general polynomials (Taylor series methods). Why linear functions and polynomials? What other classes can we try? These questions are answered in this paper.
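
A toy Python sketch of the affine-arithmetic idea mentioned above: each quantity is kept as x0 + sum of t[i]*eps_i with noise symbols eps_i in [-1,1], and keeping the linear dependence lets correlated errors cancel (e.g., x - x yields exactly 0, while naive interval arithmetic would not). Only addition and subtraction are shown; this is our illustration, not a full affine-arithmetic library.

    class Affine:
        """Toy affine form x0 + sum_i t[i]*eps_i, with eps_i in [-1, 1]."""
        def __init__(self, x0, terms=None):
            self.x0, self.terms = x0, dict(terms or {})
        def __add__(self, other):
            t = dict(self.terms)
            for i, v in other.terms.items():
                t[i] = t.get(i, 0.0) + v
            return Affine(self.x0 + other.x0, t)
        def __sub__(self, other):
            t = dict(self.terms)
            for i, v in other.terms.items():
                t[i] = t.get(i, 0.0) - v
            return Affine(self.x0 - other.x0, t)
        def interval(self):
            r = sum(abs(v) for v in self.terms.values())
            return (self.x0 - r, self.x0 + r)

    x = Affine(1.0, {1: 0.1})     # x in [0.9, 1.1]
    print((x - x).interval())     # (0.0, 0.0): the correlated error cancels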

Original file in Compressed PostScript and pdf;
updated file in Compressed PostScript and pdf.


Technical Report UTEP-CS-03-02, January 2003
Updated version UTEP-CS-03-02a, September 2003
A Feasible Algorithm for Locating Concave and Convex Zones of Interval Data and Its Use in Statistics-Based Clustering
Vladik Kreinovich, Eric J. Pauwels, Scott Ferson, and Lev Ginzburg

Published in Numerical Algorithms, 2004, Vol. 37, pp. 225-232.

Often, we need to divide n objects into clusters based on the value of a certain quantity x. For example, we can classify insects in a cotton field into groups based on their size and other geometric characteristics. Within each cluster, we usually have a unimodal distribution of x, with a probability density d(x) that increases until a certain value x0 and then decreases. It is therefore natural, based on d(x), to define a cluster as the interval between two local minima, i.e., as a union of adjacent increasing and decreasing segments. In this paper, we describe a feasible algorithm for solving this problem.
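
A naive illustration of the idea (not the paper's feasible algorithm): given density values on a uniform grid, cut at interior local minima so that each resulting segment is one increasing-then-decreasing cluster.

    def split_at_minima(d):
        """Cut positions at interior local minima of grid density values d,
        so each segment is one increasing-then-decreasing (unimodal) cluster."""
        return [i for i in range(1, len(d) - 1)
                if d[i] <= d[i - 1] and d[i] < d[i + 1]]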

Original file in Compressed PostScript and pdf;
updated file in Compressed PostScript and pdf.


Technical Report UTEP-CS-03-01, January 2003
Updated version UTEP-CS-03-01a, September 2003
A Full Function-Based Calculus of Directed and Undirected Intervals: Markov's Interval Arithmetic Revisited
Juergen Wolff von Gudenberg and Vladik Kreinovich

Published in Numerical Algorithms, 2004, Vol. 37, pp. 417-428.

This paper proposes a new interpretation of intervals as classes of functions having the same domain. Interval operations are seen as operations on these classes. This approach makes it possible to recover Markov's directed interval arithmetic by taking into account the monotonicity of the functions.

Original file in Compressed PostScript and pdf;
updated file in Compressed PostScript and pdf.