University of Texas at El Paso
Computer Science Department
Abstracts of 2002 Reports


Technical Report UTEP-CS-02-33, December 2002
Why Benchmarking is an (Asymptotically) Optimal Approach to Numerical Methods: A Geombinatoric Proof
Vladik Kreinovich and Scott A. Starks

Published in Geombinatorics, 2004, Vol. 13, No. 3, pp. 131-138.

In numerical mathematics, one of the most frequently used ways of gauging the quality of different numerical methods is benchmarking. Specifically, once we have methods that work well on some (but not all) problems from a given problem class, we find the problem that is the toughest for the existing methods. This problem becomes a benchmark for gauging how well different methods solve problems that previous methods could not. Once we have a method that works well on this benchmark problem, we repeat the process: we select, as a new benchmark, a problem that is the toughest to solve by the new methods, and look for a new method that works best on this new benchmark. At first glance, this idea sounds like a heuristic, but its success in numerical mathematics indicates that this heuristic is either optimal or at least close to optimal. In this paper, we use the geombinatoric approach to prove that benchmarking is indeed asymptotically optimal.
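
As a toy illustration of the loop described above, the following Python sketch selects benchmarks and methods from fixed pools; the pools and the cost function are hypothetical placeholders, since in reality new methods are designed by researchers rather than drawn from a list.

  # A hedged sketch of the benchmarking loop from the abstract; "problems",
  # "candidate_methods", and cost(method, problem) are placeholders.
  def benchmarking_loop(problems, candidate_methods, cost, rounds=5):
      portfolio = [candidate_methods.pop(0)]           # start with some method
      for _ in range(rounds):
          if not candidate_methods:
              break
          # the new benchmark: the problem toughest for the current portfolio
          benchmark = max(problems,
                          key=lambda p: min(cost(m, p) for m in portfolio))
          # adopt the candidate method that works best on this benchmark
          best = min(candidate_methods, key=lambda m: cost(m, benchmark))
          candidate_methods.remove(best)
          portfolio.append(best)
      return portfolio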

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-32, November 2002
Intelligent Technologies
(Introduction to the Special Issue)
Vladik Kreinovich, Hung T. Nguyen, Nadipuram S. Prasad, and Pratit Santiprabhob

Published in International Journal of Intelligent Systems, 2004, Vol. 19, No. 1/2, pp. 1-8.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-31, November 2002
Updated version UTEP-CS-02-31a, January 2003
Towards Foundations of Processing Imprecise Data: From Traditional Statistical Techniques of Processing Crisp Data to Statistical Processing of Fuzzy Data
Hung T. Nguyen, Tonghui Wang, and Vladik Kreinovich

Published in Yingming Liu, Guoqing Chen, Mingsheng Ying and Kai-Yuan Cai (eds.), Proceedings of the International Conference on Fuzzy Information Processing: Theories and Applications FIP'2003, Beijing, China, March 1-4, 2003, Vol. II, pp. 895-900.

In traditional statistics, we process crisp data - usually, the results of measurements and/or observations. However, not all knowledge comes from measurements and observations. In many real-life situations, in addition to the results of measurements and observations, we have expert estimates, estimates that are often formulated in terms of natural language, like "x is large". Before we can analyze how to process these statements, we must be able to translate them into a language that a computer can understand. This translation of expert statements from natural language into the precise language of numbers is one of the main original objectives of fuzzy logic. It is therefore important to extend traditional statistical techniques from processing crisp data to processing fuzzy data. In this paper, we provide an overview of our related research.

Original file in Compressed PostScript and pdf;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-30, October 2002
Research on Advanced Soft Computing and its Applications
(Introduction to the Special Issue)
Vilem Novak, Irina Perfilieva, Hung T. Nguyen, and Vladik Kreinovich

Published in Soft Computing, 2004, Vol. 8, No. 4, pp. 239-246.

The main objective of the research presented in this special issue is to advance the theoretical basis of soft computing, with the ultimate purpose of improving applications.

Why is this theoretical research needed? Because soft computing in general (and intelligent control and decision making in particular) is, in many respects, still an art. To make this methodology easier to apply, we must use the experience of successful applications of fuzzy control, decision making, or classification, and extract formal rules that capture this experience. To be able to do that efficiently, we must understand why some versions of the soft computing methodology turned out to be more successful in some practical situations and less successful in others. In other words, to advance the practical success of the soft computing methodology, we need further theoretical analysis of soft computing --- analysis targeted at enhancing its application abilities.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-29, October 2002
Updated version UTEP-CS-02-29b, October 2004
Convergence Properties of an Interval Probabilistic Approach to System Reliability Estimation
Cliff Joslyn and Vladik Kreinovich

Preliminary version published as Los Alamos National Laboratory Technical Report LA-UR-02-6261, Los Alamos, NM, 2002; full paper published in International Journal of General Systems, 2005, Vol. 34, No. 4, pp. 465-482.

Based on a black box model of a complex system, and on intervals and probabilities describing the known information about the inputs, we want to estimate the system's reliability. Using the results of tests performed on the system's computer model, we can estimate the lower and upper bounds of the probability that the system is in a desirable state. In this paper, we prove that these estimates are correct in the sense that under reasonable assumptions, these estimates converge to the actual probability bounds.

Original file in Compressed PostScript and pdf
updated version in pdf


Technical Report UTEP-CS-02-28, October 2002
Updated version UTEP-CS-02-28a, February 2003
A New Cauchy-Based Black-Box Technique for Uncertainty in Risk Analysis
Vladik Kreinovich and Scott Ferson

Published in Reliable Engineering and Systems Safety, 2004, Vol. 85, No. 1-3, pp. 267-279.

Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe the set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need methods for handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general formalism for handling different types of uncertainty, and to describe a new black-box technique for processing this type of uncertainty.

Original file in Compressed PostScript and pdf;
updated version in Compressed PostScript and pdf;
to download the corresponding C code, click here


Technical Report UTEP-CS-02-27, October 2002
Updated version UTEP-CS-02-27a, March 2003
Use of Fuzzy Expert's Information in Measurement and What We Can Gain from its Application in Geophysics
Leon Reznik, Vladik Kreinovich and Scott A. Starks

Published in Proceedings of the IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2003, St. Louis, Missouri, May 25-28, 2003, pp. 1026-1031.

The paper considers the problem of fusing measurement information from different sources, when one of the sources provides information about approximate values of the measured variables or their combinations. This information is given in the form of fuzzy models and is used in combination with the measurement results. The properties of the modified estimates are studied in comparison with the conventional ones. We derive the conditions under which the use of expert information can give a high gain, estimate the value of this gain, and give recommendations to an expert on making predictions. The possible gain in measurement efficiency in geophysical applications is analyzed.

Original file in MS Word;
updated version in MS Word and pdf


Technical Report UTEP-CS-02-26, October 2002
Revised version UTEP-CS-02-26c, July 2003
Eliminating Duplicates under Interval and Fuzzy Uncertainty: An Asymptotically Optimal Algorithm and its Geospatial Applications
Roberto Torres, G. Randy Keller, Vladik Kreinovich, and Luc Longpre

Published in Reliable Computing, 2004, Vol. 10, No. 5, pp. 401-422.

Geospatial databases generally consist of measurements related to points (or pixels in the case of raster data), lines, and polygons. In recent years, the size and complexity of these databases have increased significantly and they often contain duplicate records, i.e., two or more close records representing the same measurement result. In this paper, we address the problem of detecting duplicates in a database consisting of point measurements. As a test case, we use a database of measurements of anomalies in the Earth's gravity field that we have compiled. In this paper, we show that a natural duplicate deletion algorithm requires (in the worst case) quadratic time, and we propose a new asymptotically optimal O(n log(n)) algorithm. These algorithms have been successfully applied to gravity databases. We believe that they will prove to be useful when dealing with many other types of point data.

Original file in Compressed PostScript and pdf;
revised version in Compressed PostScript and pdf


Technical Report UTEP-CS-02-25, October 2002
An IDL/ENVI Implementation of the FFT Based Algorithm for Automatic Image Registration
Hongjie Xie, Nigel Hicks, G. Randy Keller, Haitao Huang, and Vladik Kreinovich

Published in Computers and Geosciences, 2003, Vol. 29, No. 8, pp. 1045-1055.

Georeferencing images is a laborious process, so schemes for automating this process have been under investigation for some time. Among the most promising automatic registration algorithms are those based on the Fast Fourier Transform (FFT). The displacement between the two given images can be computed by forming the ratio F1*conj(F2)/|F1*F2| and then applying the inverse Fourier transform. The result is an impulse-like function, which is approximately zero everywhere except at the displacement that is necessary to optimally register the images. Converting from rectangular coordinates to log-polar coordinates, shifts representing rotation and scaling can also be determined to complete the georectification process. Our FFT-based algorithm has been successfully implemented in IDL (Interactive Data Language) and added as two user functions to the image processing software package ENVI (ENvironment for Visualizing Images). ENVI handles all pre-processing and post-processing work such as input, output, display, filtering, analysis, and file management. To test this implementation, several dozen tests were conducted on both simulated and "real world" images. The results of these tests show the advantages and limitations of this algorithm. In particular, our tests show that the accuracy of the resulting registration is quite good compared to current manual methods.
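
The translational part of this procedure can be sketched in a few lines of Python/NumPy; this is an illustration of the phase-correlation formula quoted in the abstract, not the authors' IDL/ENVI implementation, and the log-polar step for rotation and scaling is omitted.

  import numpy as np

  def estimate_shift(im1, im2):
      """Return the (row, col) displacement by which im1 is shifted
      relative to im2, using the ratio F1*conj(F2)/|F1*conj(F2)|."""
      F1, F2 = np.fft.fft2(im1), np.fft.fft2(im2)
      cross = F1 * np.conj(F2)
      cross /= np.abs(cross) + 1e-12            # normalize; avoid division by 0
      corr = np.fft.ifft2(cross).real           # impulse-like correlation surface
      peak = np.unravel_index(np.argmax(corr), corr.shape)
      # peaks past the midpoint correspond to negative shifts
      return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

  # toy check: the first argument is the second one shifted by (3, 5) pixels
  im = np.random.rand(64, 64)
  print(estimate_shift(np.roll(im, (3, 5), axis=(0, 1)), im))   # (3, 5)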

File in MS Word


Technical Report UTEP-CS-02-24, September 2002
Updated version UTEP-CS-02-24a, November 2002
Designing Interdisciplinary Approaches to Problem Solving into Computer Languages
Daniel E. Cooke, Vladik Kreinovich, Joseph E. Urban

Published in Journal of Integrated Design and Process Science, 2002, Vol. 6, No. 3, pp. 29-43.

Many interdisciplinary design efforts require the involvement of computer scientists because of the complexity of the problem solving tools available for the projects. This paper demonstrates how appropriate language design can place high level languages in the hands of scientists and engineers, thus providing a more automated approach to problem solving that may reduce the amount of computer scientist involvement. The language SequenceL serves as an example of this approach.

Original file in MS Word;
updated version in MS Word and in pdf.


Technical Report UTEP-CS-02-23, September 2002
Revised version UTEP-CS-02-23c, March 2003
Short version UTEP-CS-02-23b, March 2003
Dirty Pages of Logarithm Tables, Lifetime of the Universe, and (Subjective) Probabilities on Finite and Infinite Intervals
Hung T. Nguyen, Vladik Kreinovich, and Luc Longpre

Short version published in Proceedings of the IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2003, St. Louis, Missouri, May 25-28, 2003, pp. 67-73. Full paper published in Reliable Computing, 2004, Vol. 10, No. 2, pp. 83-106.

To design data processing algorithms with the smallest average processing time, we need to know what this "average" stands for. At first glance, it may seem that real-life data are really "chaotic", and no probabilities are possible at all: today, we may apply our software package to elementary particles, tomorrow -- to distances between the stars, etc. However, contrary to this intuitive feeling, there are stable probabilities in real-life data. This fact was first discovered in 1881 by Simon Newcomb, who noticed that the first pages of logarithm tables (that contain numbers starting with 1) are more used than the last ones (that contain numbers starting with 9). To check why, he took all the physical constants from a reference book and counted how many of them start with 1. An intuitive expectation is that all 9 digits should be equally probable. In reality, instead of 11%, about 30% of these constants turned out to start with 1. In general, the fraction of constants that start with a digit d is proportional to ln(d+1)-ln(d). We describe a new interval computations-related explanation for this empirical fact, and we explain its relationship with the lifetime of the Universe and with the general problem of determining subjective probabilities on finite and infinite intervals.
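
For reference, the predicted first-digit frequencies can be computed directly; the following lines are a simple numerical illustration of the formula quoted above.

  # First-digit frequencies: proportional to ln(d+1) - ln(d),
  # i.e., equal to log10(1 + 1/d) after normalization.
  import math
  for d in range(1, 10):
      print(d, round(math.log10(1 + 1 / d), 3))
  # digit 1 -> 0.301 (about 30%), ..., digit 9 -> 0.046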

Short version in Compressed PostScript and pdf
Original version of the full paper in Compressed PostScript and pdf;
revised version of the full paper in Compressed PostScript and pdf


Technical Report UTEP-CS-02-22, August 2002
Updated version UTEP-CS-02-22a, November 2002
Updated version UTEP-CS-02-22b, March 2003
Exact Upper Bound on the Mean of the Product of Many Random Variables with Known Expectations
Vladik Kreinovich, Scott Ferson, and Lev Ginzburg

Published in Reliable Computing, 2003, Vol. 9, No. 6, pp. 441-463.

In practice, in addition to the intervals [xi] of possible values of inputs x1,...,xn, we sometimes also know their means Ei. For such cases, we provide an explicit exact (= best possible) upper bound for the mean of the product x1*...*xn of positive values xi.

Original file in Compressed PostScript and pdf
first updated version in Compressed PostScript and pdf
second updated version in Compressed PostScript and pdf


Technical Report UTEP-CS-02-21a, August 2002
Interval Computations Related to Privacy in Statistical Databases
Luc Longpre and Vladik Kreinovich

We show that the need to maintain privacy in statistical databases naturally leads to interval computations, and provide feasible algorithms for the corresponding interval computation problems.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-20, June 2002
Kolmogorov Complexity and Chaotic Phenomena
Vladik Kreinovich and Isaak A. Kunin

Published in International Journal of Engineering Science, 2003, Vol. 41, No. 3-5, pp. 483-493.

Born about three decades ago, Kolmogorov Complexity Theory (KC) led to important discoveries that, in particular, give a new understanding of the fundamental problem: interrelations between classical continuum mathematics and reality (physics, biology, engineering sciences, ...).

Specifically, in addition to the equations, physicists use the following additional difficult-to-formalize property: that the initial conditions and the values of the parameters must not be abnormal. We describe a natural formalization of this property, and show that this formalization is in good accordance with theoretical physics. At present, this formalization has been mainly applied to the foundations of physics. However, potentially, more practical engineering applications are possible.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-19, May 2002
Why is Selecting the Simplest Hypothesis (Consistent with Data) a Good Idea? A Simple Explanation
Vladik Kreinovich, Luc Longpre, Scott Ferson, and Lev Ginzburg

Published in Bulletin of the European Association for Theoretical Computer Science (EATCS), 2002, Vol. 77, pp. 191-194.

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-18, May 2002
From Computation with Guaranteed Intervals to Computation with Confidence Intervals: A New Application of Fuzzy Techniques
Vladik Kreinovich, Hung T. Nguyen, Scott Ferson, and Lev Ginzburg

Published in the Proceedings of the 21st International Conference of the North American Fuzzy Information Processing Society NAFIPS'2002, New Orleans, Louisiana, June 27-29, 2002, pp. 418-422.

Traditional interval computations provide an estimate for the result y=f(x1,...,xn) of data processing when we know intervals X1,...,Xn that are guaranteed to contain the (unknown) actual values of the quantities x1,...,xn. Often, in addition to these guaranteed intervals, we have confidence intervals for these quantities, i.e., intervals Xi that contain the corresponding values xi with a certain probability. It is desirable, based on the confidence intervals for xi, to produce the resulting confidence interval for y. It turns out that the formulas for computing such resulting confidence interval are closely related with the formulas for processing fuzzy numbers by using Zadeh's extension principle. Thus, known algorithms for processing fuzzy data can be used to process confidence intervals as well.
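
A minimal sketch of the kind of computation discussed here, for f(x1,x2) = x1*x2: the intervals are combined by naive interval arithmetic, and the confidence level of the result is bounded from below by a union bound. This is an illustration only; the exact formulas and their relation to Zadeh's extension principle are given in the paper.

  def interval_mul(a, b):
      products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
      return (min(products), max(products))

  X1, alpha1 = (1.9, 2.1), 0.05    # x1 in X1 with probability >= 95%
  X2, alpha2 = (0.8, 1.2), 0.10    # x2 in X2 with probability >= 90%

  Y = interval_mul(X1, X2)
  confidence = 1 - (alpha1 + alpha2)          # union bound on the miss probability
  print(Y, confidence)                        # (1.52, 2.52), probability >= 0.85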

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-17, May 2002
Selecting a Fuzzy Logic Operation from the DNF-CNF Interval: How Practical are the Resulting Operations?
I. B. Turksen, A. Esper, K. Patel, S. A. Starks, and V. Kreinovich

Published in the Proceedings of the 21st International Conference of the North American Fuzzy Information Processing Society NAFIPS'2002, New Orleans, Louisiana, June 27-29, 2002, pp. 28-33.

In classical (two-valued) logic, the CNF and DNF forms of each propositional formula are equivalent to each other. In fuzzy logic, the CNF and DNF forms are not equivalent; they form an interval that contains the fuzzy values of all classically equivalent propositional formulas. If we want to select a single value from this interval, then it is natural to select a linear combination of the interval's endpoints. In particular, we can do that for the CNF and DNF forms of "and" and "or", thus designing natural fuzzy analogues of the classical "and" and "or" operations. The problem with the operations selected in this way is that, contrary to common sense expectations, they are not associative. In this paper, we show that the largest possible value of the corresponding non-associativity is reasonably small and thus, this non-associativity does not make these operations impractical.
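
The degree of non-associativity is easy to examine numerically. In the sketch below, min(a,b) and the Lukasiewicz operation max(a+b-1,0) are used only as stand-ins for the interval endpoints (the actual DNF and CNF endpoints are derived in the paper), and the associativity defect of their convex combination is measured on a grid.

  import itertools
  import numpy as np

  def and_op(a, b, lam):
      # convex combination of two "and"-like operations (stand-in endpoints)
      return lam * min(a, b) + (1 - lam) * max(a + b - 1.0, 0.0)

  grid = np.linspace(0.0, 1.0, 21)
  lam = 0.5
  defect = max(abs(and_op(and_op(a, b, lam), c, lam) -
                   and_op(a, and_op(b, c, lam), lam))
               for a, b, c in itertools.product(grid, repeat=3))
  print(defect)    # the worst-case violation of associativity on this grid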

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-16, May 2002
Main Ideas Behind OWA Lead to a Universal and Optimal Approximation Scheme
Ronald R. Yager and Vladik Kreinovich

Published in the Proceedings of the 21st International Conference of the North American Fuzzy Information Processing Society NAFIPS'2002, New Orleans, Louisiana, June 27-29, 2002, pp. 428-433.

Ordered Weighted Averaging (OWA) operators have been successfully applied in many practical problems. We explain this empirical success by showing that these operators are indeed guaranteed to work (i.e., are universal), and that these operators are the best to use (in some reasonable sense).
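
For completeness, the textbook definition of an OWA operator: sort the arguments in decreasing order and apply a fixed weight vector to the sorted values. The weights below are illustrative.

  def owa(values, weights):
      # weights must be nonnegative and sum to 1
      assert len(values) == len(weights) and abs(sum(weights) - 1.0) < 1e-9
      return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

  print(owa([0.3, 0.9, 0.5], [0.5, 0.3, 0.2]))   # 0.9*0.5 + 0.5*0.3 + 0.3*0.2 = 0.66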

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-15, May 2002
Fuzzy Measures and Integrals as Aggregation Operators: Solving the Commensurability Problem
Francois Modave and Vladik Kreinovich

Published in the Proceedings of the 21st International Conference of the North American Fuzzy Information Processing Society NAFIPS'2002, New Orleans, Louisiana, June 27-29, 2002, pp. 292-297.

The aim of this paper is to shed some light on the use of fuzzy measures and integrals as aggregation operators in multicriteria decision making. These techniques have been widely used on an ad hoc basis, but with no axiomatization. It is possible to obtain preference representation theorems for multicriteria decision making problems by relying on a formal parallelism between decision under uncertainty and multicriteria decision making. However, this raises some commensurability problems. In this paper, we show how to obtain an axiomatization of multicriteria decision making problems in a very natural way, and we show how to solve the commensurability problem in a particular case.
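
As a concrete reminder of the aggregation operator in question, here is a minimal sketch of the discrete Choquet integral with respect to a fuzzy measure mu; the measure values are illustrative, not taken from the paper.

  def choquet(scores, mu):
      """scores: dict criterion -> value; mu: dict frozenset -> measure,
      with mu(empty set) = 0 and mu(all criteria) = 1."""
      items = sorted(scores.items(), key=lambda kv: kv[1])   # ascending values
      total, prev, remaining = 0.0, 0.0, set(scores)
      for criterion, value in items:
          total += (value - prev) * mu[frozenset(remaining)]
          prev = value
          remaining.remove(criterion)
      return total

  mu = {frozenset(): 0.0, frozenset({'a'}): 0.3,
        frozenset({'b'}): 0.5, frozenset({'a', 'b'}): 1.0}
  print(choquet({'a': 0.8, 'b': 0.4}, mu))   # 0.4*1.0 + (0.8-0.4)*0.3 = 0.52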

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-14, March 2002
Detecting and Locating Curved Cracks in Thin Plates by Lamb Wave Reflection: Validated Geometric Approach
Roberto Osegueda and Vladik Kreinovich

Short version published in the Extended Abstracts of 2002 SIAM Workshop on Validated Computing, Toronto, Canada, May 23-25, 2002, pp. 149-151.

Lamb waves propagate through a thin plate and thus provide a way of scanning this plate and detecting cracks and other faults. The equations describing these waves are rather complex, and, as a result, it is difficult to extract the location of the fault from the received signal. Recently, a new geometric approach has been proposed which makes it possible, for linear cracks, to determine the presence and the location of a crack by using only the geometry of wave propagation. In this paper, we extend this approach to the more realistic case of curved cracks, and apply interval techniques to provide guaranteed bounds for the fault locations.

Short version in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-13b, April 2002
Computing Variance for Interval Data is NP-Hard
Scott Ferson, Lev Ginzburg, Vladik Kreinovich, Luc Longpre, and Monica Aviles

Published in ACM SIGACT News, 2002, Vol. 33, No. 2, pp. 108-118.

When we have only interval ranges [xi-,xi+] of sample values x1,...,xn, what is the interval [V-,V+] of possible values for the variance V of these values? We prove that the problem of computing the upper bound V+ is NP-hard. We provide a feasible (quadratic time) algorithm for computing the lower bound V- on the variance of interval data. We also provide a feasible algorithm that computes V+ under reasonable easily verifiable conditions.
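
The idea behind the feasible lower-bound computation can be sketched as follows (an illustration, not the authors' code): the minimum of the variance over the box of intervals is attained when every xi is the point of [xi-,xi+] closest to a common trial mean m, and between consecutive sorted endpoints the variance of this clamped vector is a quadratic in m, so it can be minimized zone by zone.

  import numpy as np

  def variance(xs):
      xs = np.asarray(xs, dtype=float)
      return float(np.mean((xs - xs.mean()) ** 2))

  def clamp_to_mean(m, lo, hi):
      """For a trial mean m, take the point of each interval closest to m."""
      return np.clip(m, lo, hi)

  def lower_bound_variance(lo, hi):
      lo, hi = np.asarray(lo, float), np.asarray(hi, float)
      pts = np.unique(np.concatenate([lo, hi]))
      best = variance(clamp_to_mean(pts[0], lo, hi))
      for a, b in zip(pts[:-1], pts[1:]):
          # inside [a, b] the variance of the clamped vector is a quadratic
          # in m, so three samples determine it exactly
          ms = np.array([a, (a + b) / 2, b])
          vs = [variance(clamp_to_mean(m, lo, hi)) for m in ms]
          c2, c1, _ = np.polyfit(ms, vs, 2)
          candidates = [a, b]
          if c2 > 0:
              vertex = -c1 / (2 * c2)
              if a < vertex < b:
                  candidates.append(vertex)
          for m in candidates:
              best = min(best, variance(clamp_to_mean(m, lo, hi)))
      return best

  # three interval measurements
  print(lower_bound_variance([0.9, 1.8, 2.9], [1.2, 2.2, 3.1]))   # about 0.482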

File in Compressed PostScript and pdf


Technical Report UTEP-CS-02-13d, October 2002
Final version UTEP-CS-02-13f, September 2004
Exact Bounds on Finite Populations of Interval Data
Scott Ferson, Lev Ginzburg, Vladik Kreinovich, Luc Longpre, and Monica Aviles

Published in Reliable Computing, 2005, Vol. 11, No. 3, pp. 207-233.

In this paper, we start research into using intervals to bound the impact of bounded measurement errors on the computation of bounds on finite population parameters ("descriptive statistics"). Specifically, we provide a feasible (quadratic time) algorithm for computing the lower bound on the finite population variance function of interval data. We prove that the problem of computing the upper bound on the finite population variance function of interval data is, in general, NP-hard. We provide a feasible algorithm that computes this upper bound under reasonable easily verifiable conditions, and provide preliminary results on computing other functions of finite populations.

Original file in Compressed PostScript and pdf
final version in Compressed PostScript and pdf


Technical Report UTEP-CS-02-13, March 2002
Exact Bounds on Sample Variance of Interval Data
Scott Ferson, Lev Ginzburg, Vladik Kreinovich, and Monica Aviles

Short version published in the Extended Abstracts of 2002 SIAM Workshop on Validated Computing, Toronto, Canada, May 23-25, 2002, pp. 67-69.

We provide a feasible (quadratic time) algorithm for computing the lower bound on the sample variance V of interval data. The problem of computing the upper bound on V is, in general, NP-hard. We provide a feasible algorithm that computes the upper bound on V for many reasonable situations.

File in Compressed PostScript and pdf
short version in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-12, March 2002
Updated version UTEP-CS-02-12b, October 2002
Absolute Bounds on the Mean of Sum, Product, Max, and Min: A Probabilistic Extension of Interval Arithmetic
Scott Ferson, Lev Ginzburg, Vladik Kreinovich, and Jorge Lopez

Short version published in the Extended Abstracts of 2002 SIAM Workshop on Validated Computing, Toronto, Canada, May 23-25, 2002, pp. 70-72; full paper published in Applied Mathematical Sciences, 2007, Vol. 1, No. 9, pp. 395-440.

We extend the main formulas of interval arithmetic for different arithmetic operations x1*x2 to the case when, for each input xi, in addition to the interval [xi]=[xi-,xi+] of possible values, we also know its mean Ei (or an interval [Ei] of possible values of the mean), and we want to find the corresponding bounds for x1*x2 and its mean.
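
For the simplest operation, the sum, the extension is straightforward and is shown below as a hedged illustration: interval bounds add, and the mean of the sum is exactly the sum of the means. The analogous bounds for products, max, and min are much less obvious and form the subject of the paper; they are not reproduced here.

  def add_with_mean(x1, E1, x2, E2):
      """x1, x2: intervals (lo, hi); E1, E2: intervals of possible means."""
      interval = (x1[0] + x2[0], x1[1] + x2[1])
      mean = (E1[0] + E2[0], E1[1] + E2[1])
      return interval, mean

  print(add_with_mean((0.0, 1.0), (0.4, 0.6), (2.0, 4.0), (2.5, 3.0)))
  # -> ((2.0, 5.0), (2.9, 3.6))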

First version (full text) in Compressed PostScript and pdf
short version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf


Technical Report UTEP-CS-02-11, March 2002
Revised version UTEP-CS-02-11b, November 2002
Final version UTEP-CS-02-11c, January 2003
Are There Easy-To-Check Necessary and Sufficient Conditions for Straightforward Interval Computations to Be Exact?
Vladik Kreinovich, Luc Longpre, and James J. Buckley

Short version published in the Extended Abstracts of 2002 SIAM Workshop on Validated Computing, Toronto, Canada, May 23-25, 2002, pp. 94-96; full paper published in Reliable Computing, 2003, Vol. 9, No. 5, pp. 349-358.

We prove that no "efficient" (easy-to-check) necessary and sufficient conditions are possible for checking whether straightforward interval computations lead to the exact result.

First version (full text) in Compressed PostScript and pdf
short version in Compressed PostScript and pdf;
Revised version in Compressed PostScript and pdf;
Final version in Compressed PostScript and pdf


Technical Report UTEP-CS-02-10, February 2002
Optimization Techniques under Uncertain Criteria, and Their Possible Use in Computerized Education
Vladik Kreinovich and Richard Alo

Published in Proceedings of the International Workshop on Research and Development of Human Communication Technologies for Conversational Interaction and Learning, Puebla, Mexico, January 19-20, 2002.

The existing successful automated computerized systems more or less simulate the way successful human teachers teach. However, computerized systems provide more individualized options than traditional classroom education, and it is desirable to use this additional freedom to further improve the education success rate. In this paper, we briefly overview the experience of a successful Russian training system, and explain how general techniques of optimization under uncertainty can be used to optimize content development.

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-09, January 2002
Use of Satellite Image Referencing Algorithms to Characterize Asphaltic Concrete Mixtures
Scott A. Starks, Soheil Nazarian, Vladik Kreinovich, and Joseph Adidhela

Published in Proceedings of FUZZ-IEEE'2002, Honolulu, Hawaii, May 12-17, 2002, Vol. 1, pp. 536-540.

A natural way to test the structural integrity of a pavement is to send signals with different frequencies through the pavement and compare the results with the signals passing through an ideal pavement. For this comparison, we must determine how, for the corresponding mixture, the elasticity E depends on the frequency f in the range from 0.1 to 10^5 Hz. It is very expensive to perform measurements in the high-frequency range (above 20 Hz). To avoid these measurements, we can use the fact that, for most of these mixtures, when we change the temperature, the dependence changes simply by scaling. Thus, instead of performing expensive measurements for different frequencies, we can measure the dependence of E on moderate frequencies f at different temperatures, and then combine the resulting curves into a single "master" curve. In this paper, we show how fuzzy techniques can help to automate this "combination".
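
The "combination" step can be illustrated by a small NumPy sketch: for each temperature, find the horizontal shift in log-frequency that best aligns the measured curve with a reference curve (time-temperature superposition). The data below are synthetic, and the fuzzy technique that the paper uses to automate the choice of shifts is not reproduced.

  import numpy as np

  def best_shift(logf_ref, E_ref, logf, E, shifts=np.linspace(-3, 3, 601)):
      """Shift a (in log10 f) minimizing the mismatch between the shifted
      curve (logf + a, E) and the reference curve on their overlap."""
      best_a, best_err = 0.0, np.inf
      for a in shifts:
          lo = max(logf_ref.min(), logf.min() + a)
          hi = min(logf_ref.max(), logf.max() + a)
          if hi <= lo:
              continue
          grid = np.linspace(lo, hi, 50)
          err = np.mean((np.interp(grid, logf_ref, E_ref)
                         - np.interp(grid, logf + a, E)) ** 2)
          if err < best_err:
              best_a, best_err = a, err
      return best_a

  def master(x):
      # a smooth S-shaped stand-in for the E(f) master curve
      return 1.0 / (1.0 + np.exp(-x))

  # synthetic example: the second curve is the master curve shifted by 1.5 decades
  logf = np.linspace(-1, 2, 30)
  print(best_shift(logf, master(logf), logf, master(logf + 1.5)))   # about 1.5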

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-08, January 2002
Computational Complexity of Planning with Discrete Time and Continuous State Variables
Chitta Baral and Vladik Kreinovich

Traditionally, most planning research in AI has concentrated on systems whose state can be characterized by discrete-valued fluents. In many practical applications, however, we want to control systems (like robots) whose state can only be described if we use continuous variables (like coordinates). Planning for such systems corresponds, crudely speaking, to Level 2 of the planning language PDDL2.1. In this paper, we analyze the computational complexity of such planning problems.

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-07, January 2002
Probability of Implication, Logical Version of Bayes Theorem, and Fuzzy Logic Operations
Hung T. Nguyen, Masao Mukaidono, and Vladik Kreinovich

Published in Proceedings of FUZZ-IEEE'2002, Honolulu, Hawaii, May 12-17, 2002, pp. 530-535.

Logical inference starts with concluding that if B implies A, and B is true, then A is true as well. To describe probabilistic inference rules, we must therefore define the probability of an implication "A if B". There exist two different approaches to defining this probability, and these approaches lead to different probabilistic inference rules: We may interpret the probability of an implication as the conditional probability P(A|B), in which case we get Bayesian inference. We may also interpret this probability as the probability of the material implication "A or not B", in which case we get different inference rules. In this paper, we develop a general approach to describing the probability of an implication, and we describe the corresponding general formulas, of which the Bayesian and material implications are particular cases. This general approach is naturally formulated in terms of t-norms, a notion normally encountered in fuzzy logic.
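
A small numerical illustration of the two interpretations (the probability values are arbitrary):

  p_A_and_B = 0.3
  p_B = 0.5
  p_conditional = p_A_and_B / p_B        # P(A | B) = 0.6 (Bayesian reading)
  p_material = 1 - p_B + p_A_and_B       # P(A or not B) = 0.8 (material implication)
  print(p_conditional, p_material)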

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-06, January 2002
On Symmetric Solution Sets
Goetz Alefeld, Vladik Kreinovich, and Guenter Mayer

Published in: J. Herzberger (ed.), Inclusion Methods for Nonlinear Problems, Springer-Verlag, Wien, New York, 2003, pp. 1-23.

Given an n x n interval matrix [A] and an interval vector [b] with n components, we present an overview of existing results on the solution set S of linear systems of equations Ax=b with symmetric matrices A from [A] and vectors b from [b]. Similarly, we consider the set E of eigenpairs associated with the symmetric matrices A from [A]. We report on characterizations of S by means of inequalities, by means of intersections of sets, and by an approach which can be generalized to more general dependencies among the entries. We also recall two methods for enclosing S by means of interval vectors, and we mention a characterization of E.
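
The symmetric solution set S is easy to visualize by sampling (a crude Monte Carlo illustration, not one of the characterizations given in the paper): draw symmetric matrices A from [A] and vectors b from [b], solve Ax=b, and collect the solutions. The 2 x 2 interval data below are illustrative.

  import numpy as np

  A_lo = np.array([[2.0, -1.0], [-1.0, 2.0]])
  A_hi = np.array([[3.0,  0.0], [ 0.0, 3.0]])   # [A] is a symmetric interval matrix
  b_lo, b_hi = np.array([1.0, 0.0]), np.array([1.5, 0.5])

  rng = np.random.default_rng(0)
  solutions = []
  for _ in range(1000):
      A = rng.uniform(A_lo, A_hi)
      A = np.triu(A) + np.triu(A, 1).T          # symmetrize (stays in [A] here)
      b = rng.uniform(b_lo, b_hi)
      solutions.append(np.linalg.solve(A, b))
  solutions = np.array(solutions)
  print(solutions.min(axis=0), solutions.max(axis=0))   # crude interval hull of samples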

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-05, January 2002
Updated version UTEP-CS-02-05a, January 2002
Uncertainty in Risk Analysis: Towards a General Second-Order Approach Combining Interval, Probabilistic, and Fuzzy Techniques
Scott Ferson, Lev Ginzburg, Vladik Kreinovich, Hung T. Nguyen, and Scott A. Starks

Published in Proceedings of FUZZ-IEEE'2002, Honolulu, Hawaii, May 12-17, 2002, Vol. 2, pp. 1342-1347.

Uncertainty is very important in risk analysis. A natural way to describe this uncertainty is to describe the set of possible values of each unknown quantity (this set is usually an interval), plus any additional information that we may have about the probability of different values within this set. Traditional statistical techniques deal with situations in which we have complete information about the probabilities; in real life, however, we often have only partial information about them. We therefore need methods for handling such partial information in risk analysis. Several such techniques have been presented, often on a heuristic basis. The main goal of this paper is to provide a justification for a general second-order formalism for handling different types of uncertainty.

Original file in Compressed PostScript and pdf;
Updated file in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-04, January 2002
Non-Destructive Testing of Aerospace Structures: Granularity and Data Mining Approach
Roberto Osegueda, Vladik Kreinovich, Lakshmi Potluri, and Richard Alo

Published in Proceedings of FUZZ-IEEE'2002, Honolulu, Hawaii, May 12-17, 2002, Vol. 1, pp. 685-689.

For large aerospace structures, it is extremely important to detect faults, and non-destructive testing is the only practical way to do it. Based on measurements of ultrasonic waves, Eddy currents, magnetic resonance, etc., we reconstruct the locations of the faults. The best (most efficient) known statistical methods for fault reconstruction are not perfect. We show that the use of expert knowledge-based granulation improves the quality of fault reconstruction.

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-03, January 2002
Was There Satan's Face in the World Trade Center Fire? A Geometric Analysis
Vladik Kreinovich and Dima Iourinski

Published in Geombinatorics, 2002, Vol. 12, No. 2, pp. 69-75.

Some photos of the 2001 World Trade Center fire reveal a "face" in the smoke which was interpreted, by some people, as the face of Satan. Most journalists believe, however, that the visible smoke configuration can be explained by natural processes, and that the visible "face" is similar to animal shapes that are sometimes observed in the clouds. In this paper, we present a simple geometric analysis that supports this natural-process explanation.

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-02, January 2002
Updated short version UTEP-CS-02-02a, January 2002
Full paper UTEP-CS-02-02c, January 2006
Swarm Intelligence: Theoretical Proof that Empirical Techniques are Optimal
Dmitriy Iourinskiy, Scott A. Starks, Vladik Kreinovich, and Stephen F. Smith

Short version published in the Proceedings of the 2002 World Automation Congress, Orlando, Florida, June 9-13, 2002, Paper ISSCI005; full paper published in A. Abraham, C. Grosan, and V. Ramos (Eds.), Stigmergic Optimization, Springer-Verlag, Berlin-Heidelberg, 2006, pp. 281-295.

A natural way to distribute tasks between autonomous agents is to use swarm intelligence techniques, which simulate the way social insects (such as wasps) distribute tasks between themselves. In this paper, we theoretically prove that the corresponding successful biologically inspired formulas are indeed statistically optimal (in some reasonable sense).
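
For concreteness, the response-threshold rule widely used in wasp-inspired task allocation (quoted from the general swarm-intelligence literature as an example of the kind of empirical formula the paper analyzes): an agent with threshold theta responds to a task emitting stimulus s with probability s^n/(s^n + theta^n), typically with n = 2.

  def response_probability(stimulus, threshold, n=2):
      return stimulus**n / (stimulus**n + threshold**n)

  print(response_probability(1.0, 1.0))   # 0.5: stimulus equal to the threshold
  print(response_probability(3.0, 1.0))   # 0.9: stimulus well above the threshold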

Original file in Compressed PostScript and pdf;
updated short version in Compressed PostScript and pdf;
full paper in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-01a, January 2002
Open-Ended Configurations of Radio Telescopes: Geometrical Analysis
Vladik Kreinovich, Scott A. Starks, Dima Iourinski, Olga Kosheleva, and Andrei Finkelstein

Published in Geombinatorics, 2003, Vol. 13, No. 2, pp. 79-85.

File in Compressed PostScript and pdf.


Technical Report UTEP-CS-02-01, January 2002
Open-Ended Configurations of Radio Telescopes: Towards Optimal Design
Vladik Kreinovich, Scott A. Starks, Olga Kosheleva, and Andrei Finkelstein

Published in the Proceedings of the 2002 World Automation Congress, Orlando, Florida, June 9-13, 2002, Paper ISSCI007.

The quality of radio astronomical images drastically depends on where we place the radio telescopes. During the design of the Very Large Array, it was empirically shown that the power law design, in which the n-th antenna is placed at a distance n^b from the center, leads to the best image quality. In this paper, we provide a theoretical justification for this empirical fact.
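
A minimal numeric illustration of the power-law design (the exponent b below is illustrative): the n-th antenna sits at distance n^b from the center, and the resulting set of pairwise baselines is what governs the image quality.

  b = 1.7                                      # illustrative exponent
  positions = [n**b for n in range(1, 10)]     # distances of antennas 1..9
  baselines = sorted(abs(p - q) for i, p in enumerate(positions)
                     for q in positions[:i])
  print([round(p, 1) for p in positions])
  print([round(d, 1) for d in baselines[:5]], "...")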

File in Compressed PostScript and pdf.