University of Texas at El Paso
Computer Science Department
Abstracts of 2000 Reports


Technical Report UTEP-CS-00-42, December 2000
Itanium's New Basic Operation of Fused Multiply-Add: Theoretical Explanation and Theoretical Challenge
Vladik Kreinovich

Published in ACM SIGACT News, 2001, Vol. 32, No. 1 (118), pp. 115-117.

Intel's new 64-bit chip Itanium has a new instruction set which includes a fused multiply-add instruction x1+x2*x3. In this short article, we explain the empirical reasons behind the choice of this instruction, give a possible theoretical explanation for this choice, and mention a related theoretical challenge.
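
To see concretely what the fused operation buys, here is a minimal Python sketch (illustrative values only, not Itanium code): it simulates a fused x1+x2*x3, which rounds the exact result only once, and compares it with the usual two-rounding sequence of a multiply followed by an add.

    from fractions import Fraction

    def fused_multiply_add(x1, x2, x3):
        """Simulate the fused operation x1 + x2*x3: compute the exact value
        with rationals and round only once at the end."""
        return float(Fraction(x1) + Fraction(x2) * Fraction(x3))

    def separate_multiply_add(x1, x2, x3):
        """The usual two-instruction sequence: round the product, then the sum."""
        return x1 + x2 * x3

    x = 1.0 / 3.0                                  # the double closest to 1/3
    print(separate_multiply_add(-1.0, 3.0, x))     # 0.0: the residual 3*x - 1 is lost
    print(fused_multiply_add(-1.0, 3.0, x))        # -2**-54: the residual is recovered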

Compressed PostScript file, pdf file.


Technical Report UTEP-CS-00-41, December 2000
Strassen's Algorithm Made (Somewhat) More Natural: A Pedagogical Remark
Ann Q. Gates and Vladik Kreinovich

Published in: Bulletin of the European Association for Theoretical Computer Science (EATCS), 2001, Vol. 73, pp. 142-145.

Strassen's 1969 algorithm for fast matrix multiplication is based on the possibility of multiplying two 2 x 2 matrices A and B by using 7 multiplications instead of the usual 8. The corresponding formulas are an important part of any algorithms course, but, unfortunately, even in the best textbook expositions, they look very ad hoc. In this paper, we show that the use of natural symmetries can make these formulas more natural.
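
For readers who want to experiment, here is a minimal Python sketch of one standard way of writing the 7-multiplication formulas for 2 x 2 matrices (the paper derives formulas of this kind from natural symmetries; the sketch only checks them against the usual product):

    def strassen_2x2(A, B):
        """Multiply 2x2 matrices A and B using 7 multiplications (Strassen, 1969)."""
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        return ((m1 + m4 - m5 + m7, m3 + m5),
                (m2 + m4,           m1 - m2 + m3 + m6))

    def naive_2x2(A, B):
        """The usual 8-multiplication product, for comparison."""
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B
        return ((a11*b11 + a12*b21, a11*b12 + a12*b22),
                (a21*b11 + a22*b21, a21*b12 + a22*b22))

    A, B = ((1, 2), (3, 4)), ((5, 6), (7, 8))
    assert strassen_2x2(A, B) == naive_2x2(A, B)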

Compressed PostScript file, pdf file.


Technical Report UTEP-CS-00-40, December 2000
Updated version UTEP-CS-00-40b, August 2007
m Solutions Good, m-1 Solutions Better
Luc Longpre, William Gasarch, G. William Walster, and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2008, Vol. 2, No. 5, pp. 223-239.

One of the main objectives of theoretical research in computational complexity and feasibility is to explain experimentally observed differences in complexity.

Empirical evidence shows that the more solutions a system of equations has, the more difficult it is to solve it. Similarly, the more global maxima a continuous function has, the more difficult it is to locate them. Until now, these empirical facts have been only partially formalized: namely, it has been shown that problems with two or more solutions are more difficult to solve than problems with exactly one solution. In this paper, we extend this result and show that for every m, problems with exactly m solutions are more difficult to solve than problems with m-1 solutions.

Rephrasing Orwell's "Four legs good, two legs better", we can describe this result as "m solutions good, m-1 solutions better".

Original file in Compressed PostScript and pdf file
Updated version in Compressed PostScript and pdf file


Technical Report UTEP-CS-00-39: withdrawn


Technical Report UTEP-CS-00-38, October 2000
Updated version UTEP-CS-00-38a, January 2001
The Prospect for Answer Sets Computation by a Genetic Model
A. Bertoni, G. Grossi, A. Provetti, V. Kreinovich, and L. Tari

Published in Proceedings of 2001 AAAI Symposium on Answer Set Programming: Towards Efficient and Scalable Knowledge Representation and Reasoning, Stanford, CA, March 26-28, 2001, pp. 1-5.

We combine recent results from both Logic Programming and Genetic Algorithms to design a new method for the efficient computation of Answer Sets of logic programs. First, the problem is reduced to the problem of finding a suitable coloring of a directed graph. Then, the problem of finding a suitable coloring is relaxed to a combinatorial optimization problem and solved (in an approximate way) by a continuous discrete-time system derived from a genetic model.

Original file in Compressed PostScript;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-37, October 2000
Updated version UTEP-CS-00-37a, March 2001
Updated version UTEP-CS-00-37b, July 2004
Optimal Finite Characterization of Linear Problems with Inexact Data
Vladik Kreinovich

Published in Reliable Computing, 2005, Vol. 11, No. 6, pp. 479-489.

For many linear problems, in order to check whether a certain property is true for all matrices A from an interval matrix [A], it is sufficient to check this property for finitely many "vertex" matrices. J. Rohn has discovered that we do not need to use all 2^(n^2) vertex matrices: it is sufficient to check these properties only for 2^(2n-1)<<2^(n^2) vertex matrices of a special type A_{yz}. In this paper, we show that a further reduction is impossible: without checking all 2^(2n-1) matrices A_{yz}, we cannot guarantee that the desired property holds for all A from [A]. Thus, these special vertex matrices provide an optimal finite characterization of linear problems with inexact data.
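
To make the count concrete, here is a minimal Python sketch that enumerates these special vertex matrices for an interval matrix [A] = [A_c - Delta, A_c + Delta]. The sign convention (A_yz)_{ij} = (A_c)_{ij} - y_i * Delta_{ij} * z_j used below is one common way of writing Rohn's construction and is meant only as an illustration; since A_{-y,-z} = A_{yz}, fixing y_1 = +1 yields the 2^(2n-1) matrices.

    from itertools import product

    def rohn_vertex_matrices(A_center, Delta):
        """Enumerate the special vertex matrices A_{yz}, y, z in {-1,+1}^n.

        (A_{yz})_{ij} = (A_c)_{ij} - y_i * Delta_{ij} * z_j  (illustrative convention).
        Because A_{-y,-z} = A_{yz}, we fix y_1 = +1 and get 2^(2n-1) matrices."""
        n = len(A_center)
        matrices = []
        for y in product([1], *([[-1, 1]] * (n - 1))):      # y_1 fixed to +1
            for z in product([-1, 1], repeat=n):
                A_yz = tuple(tuple(A_center[i][j] - y[i] * Delta[i][j] * z[j]
                                   for j in range(n)) for i in range(n))
                matrices.append(A_yz)
        return matrices

    A_c   = [[1.0, 0.5], [0.0, 2.0]]
    Delta = [[0.1, 0.2], [0.3, 0.1]]
    vs = rohn_vertex_matrices(A_c, Delta)
    print(len(vs), 2 ** (2 * len(A_c) - 1))   # 8 8: far fewer than 2^(n^2) = 16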

Original file in Compressed PostScript;
first updated version in Compressed PostScript and pdf;
second updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-36, August 2000
Fuzzy Logic and its Applications in Medicine
Nguyen Hoang Phuong and Vladik Kreinovich

Published in Proc. of Asian Pacific Medical Informatics Conference APAMI-MIC'2000, Hong Kong, September 27-30, 2000, pp. 1-11; full version published in International Journal of Medical Informatics, 2001, Vol. 62, No. 2-3, pp. 165-173.

Fuzzy set theory and fuzzy logic are a highly suitable and applicable basis for developing knowledge-based systems in medicine for tasks such as the interpretation of sets of medical findings, syndrome differentiation in Eastern medicine, diagnosis of diseases in Western medicine, mixed diagnosis of integrated Western and Eastern medicine, the optimal selection of medical treatments integrating Western and Eastern medicine, and real-time monitoring of patient data. This was verified by trials with the following systems developed by our group in Vietnam: a fuzzy Expert System for Syndromes Differentiation in Oriental Traditional Medicine, an Expert System for Lung Diseases using fuzzy logic, Case Based Reasoning for Medical Diagnosis using fuzzy set theory, a diagnostic system combining disease diagnosis of Western Medicine with syndrome differentiation of Oriental Traditional Medicine, a fuzzy system for classification of Western and Eastern medicaments, and, finally, a fuzzy system for diagnosis and treatment of Integrated Western and Eastern Medicine. All the above-mentioned systems have been developed and are being tested at hospitals.

Pdf file.


Technical Report UTEP-CS-00-35, August 2000
Updated version UTEP-CS-00-35b, October 2002
Universal Approximation Theorem for Uninorm-Based Fuzzy Systems Modeling
Ronald R. Yager and Vladik Kreinovich

To appear in Fuzzy Sets and Systems.

Most existing universal approximation results for fuzzy systems are based on the assumption that we use t-norms and t-conorms to represent "and" and "or". Yager has proposed to use, within the fuzzy system modeling paradigm, more general operations based on uninorms. In this paper, we show that the universal approximation property holds for an arbitrary choice of a uninorm.
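
For readers unfamiliar with uninorms, here is a minimal Python sketch of one well-known example (the "cross-ratio" operation with neutral element 0.5); it is merely an illustrative member of the class covered by the result, not a construction from the paper.

    def cross_ratio_uninorm(a, b):
        """A well-known uninorm with neutral element 0.5 (illustrative example).

        U(a, b) = a*b / (a*b + (1-a)*(1-b)), with the convention U(0, 1) = 0."""
        num = a * b
        den = a * b + (1 - a) * (1 - b)
        return 0.0 if den == 0.0 else num / den

    # neutral element: U(a, 0.5) == a
    for a in [0.0, 0.2, 0.7, 1.0]:
        assert abs(cross_ratio_uninorm(a, 0.5) - a) < 1e-12

    # behaves like an "and" below 0.5 and like an "or" above 0.5
    print(cross_ratio_uninorm(0.3, 0.4))   # < min(0.3, 0.4)
    print(cross_ratio_uninorm(0.7, 0.8))   # > max(0.7, 0.8)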

Original compressed PostScript file; updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-34, August 2000
Updated version UTEP-CS-00-34a, September 2000
Updated version UTEP-CS-00-34c, December 2001
Statistical and Dempster-Shafer Techniques in Testing Structural Integrity of Aerospace Structures
Roberto A. Osegueda, Seetharami R. Seelam, Ana C. Holguin, Vladik Kreinovich, and Chin-Wang Tao

Preliminary version published in Proc. of International Conference on Intelligent Technologies, Bangkok, Thailand, December 13-15, 2000, pp. 383-389; final version published in the International Journal of Uncertainty, Fuzziness, and Knowledge Based Systems, 2001, Vol. 9, No. 6, pp. 749-758.

We describe the existing statistics-related methods of testing structural integrity of aerospace structures, describe their drawbacks and how they can be overcome, and compare the resulting techniques.

Original compressed PostScript file; updated version (compressed PostScript);
final version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-33, August 2000
Updated version UTEP-CS-00-33a, March 2003
On the Relation Between Two Approaches to Combining Evidence: Ordered Abelian Groups and Uninorms
Ronald R. Yager and Vladik Kreinovich

Published in Journal of Intelligent and Fuzzy Systems, 2003, Vol. 14, No. 1, pp. 7-12.

In this paper, we develop a relationship between two approaches to combining evidence: the Ordered Abelian Group (OAG) approach and the uninorm approach. We show that while there exist uninorms that are not extended OAGs, for operations which are continuous (in some reasonable sense), these two approaches coincide.

Original file in Compressed PostScript;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-32a, October 2000
Updated version UTEP-CS-00-32b, March 2001
An Even More Realistic (Non-Associative) Interval Logic and its Relation to Psychology of Human Reasoning
I. R. Goodman, Raul A. Trejo, Vladik Kreinovich, Jesus Martinez, and Reginaldo Gonzalez

Published in the Proceedings of the Joint 9th World Congress of the International Fuzzy Systems Association and 20th International Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS 2001, Vancouver, Canada, July 25-28, 2001, pp. 1586-1591.

Original file in Compressed PostScript and pdf;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-32, August 2000
Updated version UTEP-CS-00-32d, November 2001
A Realistic (Non-Associative) Logic and a Possible Explanation of the 7+-2 Law
Raul Trejo, Vladik Kreinovich, I. R. Goodman, Jesus Martinez, and Reginaldo Gonzalez

Published in International Journal of Approximate Reasoning, 2002, Vol. 29, pp. 235-266.

When we know the subjective probabilities (degrees of belief) p1 and p2 of two statements S1 and S2, and we have no information about the relationship between these statements, then the probability of S1&S2 can take any value from the interval [max(p1+p2-1,0),min(p1,p2)]. If we must select a single number from this interval, the natural idea is to take its midpoint. The corresponding "and" operation p1&p2=(1/2)(max(p1+p2-1,0)+min(p1,p2)) is not associative. However, since the largest possible non-associativity degree |(a&b)&c-a&(b&c)| is equal to 1/9, this non-associativity is negligible if the realistic "granular" degrees of belief have granules of width <=1/9. This may explain why humans are most comfortable with <=9 items to choose from (the famous "7 plus minus 2" law).

We also show that the use of interval computations can simplify the (rather complicated) proofs.
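
The midpoint "and" operation and the 1/9 bound are easy to explore numerically; here is a minimal Python sketch (a grid search, purely illustrative) that implements the operation and reports the largest non-associativity degree it finds:

    def and_op(p1, p2):
        """Midpoint of the interval [max(p1+p2-1, 0), min(p1, p2)] of possible
        probabilities of S1 & S2."""
        return 0.5 * (max(p1 + p2 - 1.0, 0.0) + min(p1, p2))

    # largest non-associativity degree |(a & b) & c - a & (b & c)| found on a grid
    N = 60
    grid = [i / N for i in range(N + 1)]
    worst = max(abs(and_op(and_op(a, b), c) - and_op(a, and_op(b, c)))
                for a in grid for b in grid for c in grid)
    print(worst, 1 / 9)   # the observed maximum stays below the 1/9 bound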

Original file in Compressed PostScript; PDF;
Updated file in Compressed PostScript; PDF.


Technical Report UTEP-CS-00-31, August 2000
Updated version UTEP-CS-00-31a, September 2000
Optimal Elimination of Inconsistency in Expert Knowledge: Formulation of the Problem, Fast Algorithms
Timothy J. Ross, Berlin Wu, and Vladik Kreinovich

Published in Proc. of International Conference on Intelligent Technologies, Bangkok, Thailand, December 13-15, 2000, pp. 450-458.

Expert knowledge is sometimes inconsistent. In this paper, we describe the problem of eliminating this inconsistency as an optimization problem, and present fast algorithms for solving this problem.

Original compressed PostScript file; final version (compressed PostScript).


Technical Report UTEP-CS-00-30, August 2000
Updated version UTEP-CS-00-30a, September 2000
On Approximation of Fuzzy Sets by Crisp Sets: From Continuous Control-Oriented Defuzzification to Discrete Decision Making
Hung T. Nguyen, Witold Pedrycz, and Vladik Kreinovich

Published in Proc. of International Conference on Intelligent Technologies, Bangkok, Thailand, December 13-15, 2000, pp. 254-260.

In this paper, we show that the necessity to make crisp decisions in uncertain (fuzzy) situations leads to the necessity to "approximate" fuzzy sets by crisp sets. We show that seemingly natural approximation ideas - such as using alpha-cut for a given alpha - often do not work, and we describe new approximations which not only work, but which are optimal in some reasonable sense.

Original compressed PostScript file; final version (compressed PostScript).


Technical Report UTEP-CS-00-29, July 2000
Extracting Fuzzy Sparse Rule Base by Cartesian Representation and Clustering
Yeung Yam, Vladik Kreinovich, and Hung T. Nguyen

Published in Proceedings of the 2000 IEEE Conference on Systems, Man, and Cybernetics SMC'2000, Nashville, TN, October 8-10, pp. 2778-2783.

Sparse rule bases and interpolation have been proposed as a possible solution to alleviate the geometric complexity problem of large fuzzy sets. However, no formal method to extract a sparse rule base is yet available. This paper combines the recently introduced Cartesian representation of membership functions and a mountain-method-based clustering technique for this extraction. A case study is included to demonstrate the effectiveness of the approach.

Compressed PostScript file.


Technical Report UTEP-CS-00-28, July 2000
On Representation and Approximation of Operations in Boolean Algebras
I. R. Goodman and Vladik Kreinovich

Published in the International Journal of Intelligent Systems, 2001, Vol. 16, pp. 647-653.

Several universal approximation and universal representation results are known for non-Boolean multi-valued logics such as fuzzy logics. In this paper, we show that similar results can be proven for multi-valued Boolean logics as well.

Compressed PostScript file.


Technical Report UTEP-CS-00-27, June 2000
A New Universal Approximation Result for Fuzzy Systems, which Reflects CNF-DNF Duality
Irina Perfilieva and Vladik Kreinovich

To appear in International Journal of Intelligent Systems.

There are two main fuzzy system methodologies for translating expert rules into a logical formula: In Mamdani's methodology, we get a DNF formula (disjunction of conjunctions), and in a methodology which uses logical implications, we get, in effect, a CNF formula (conjunction of disjunctions). For both methodologies, universal approximation results have been proven which produce, for each approximated function f(x), two different approximating relations RDNF(x,y) and RCNF(x,y). Since in fuzzy logic, there is a known relation FCNF(x)<=FDNF(x) between CNF and DNF forms of a propositional formula F, it is reasonable to expect that we would be able to prove the existence of approximations for which a similar relation RCNF(x,y)<=RDNF(x,y) holds. Such existence is proved in our paper.

Compressed PostScript file.


Technical Report UTEP-CS-00-26a, July 2000
Why 95% and Two Sigma? A Theoretical Justification for an Empirical Measurement Practice
Hung T. Nguyen, Vladik Kreinovich, and Chin-Wang Tao

Proc. International Workshop on Intelligent Systems Resolutions: The 8th Bellman Continuum, Taipei, Taiwan, December 11-12, 2000, pp. 358-362.

The probability p(k) that the value of a random variable is far away from the mean (e.g. further than k standard deviations away) is so small that this possibility can be often safely ignored. It is desirable to select k for which the dependence of the probability p(k) on the distribution is the smallest possible. Empirically, this dependence is the smallest for k between 1.5 and 2.5. In this paper, we give a theoretical explanation for this empirical result.

Compressed PostScript file.


Technical Report UTEP-CS-00-26, May 2000
Why Two Sigma? A Theoretical Justification
Hung T. Nguyen, Vladik Kreinovich, Gennady N. Solopchenko, and Chin-Wang Tao

Published in: L. Reznik and V. Kreinovich (eds.), Soft Computing in Measurements and Information Acquisition, Springer-Verlag, 2003, pp. 10-22.

For a normal distribution, the probability density is everywhere positive, so in principle, all real numbers are possible. In reality, the probability that a random variable is far away from the mean is so small that this possibility can be often safely ignored. Usually, a small real number k is picked (e.g., 2 or 3); then, with a probability P0(k)~1 (depending on k), the normally distributed random variable with mean a and standard deviation sigma belongs to the interval A=[a-k*sigma,a+k*sigma].

The actual error distribution may be non-Gaussian; hence, the probability P(k) that a random variable belongs to A differs from P0(k). It is desirable to select k for which the dependence of P(k) on the distribution is the smallest possible. Empirically, this dependence is the smallest for k from the interval [1.5,2.5]. In this paper, we give a theoretical explanation for this empirical result.
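
As a minimal numerical illustration of how P(k) depends on the distribution (using three textbook zero-mean, unit-variance distributions, not the error models analyzed in the paper), one can compute the coverage probabilities directly:

    import math

    def coverage(k):
        """P(|X - a| <= k*sigma) for three unit-variance distributions."""
        normal  = math.erf(k / math.sqrt(2.0))
        uniform = min(1.0, k / math.sqrt(3.0))          # uniform on [-sqrt(3), sqrt(3)]
        laplace = 1.0 - math.exp(-k * math.sqrt(2.0))   # Laplace with scale 1/sqrt(2)
        return normal, uniform, laplace

    for k in [1.0, 1.5, 2.0, 2.5, 3.0]:
        n, u, l = coverage(k)
        print(f"k={k:.1f}  normal={n:.4f}  uniform={u:.4f}  laplace={l:.4f}")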

Compressed PostScript file.


Technical Report UTEP-CS-00-25c, January 2002
On the Optimal Choice of Quality Metric in Image Compression
Olga Kosheleva, Vladik Kreinovich, and Hung T. Nguyen

Published in the Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, Santa Fe, New Mexico, USA, April 7-9, 2002, pp. 116-120.

File in Compressed PostScript and in pdf.


Technical Report UTEP-CS-00-25a, June 2000
On the Optimal Choice of Quality Metric in Image Compression
Olga Kosheleva, Vladik Kreinovich, and Yeung Yam

Published in the Proceedings of the International Symposium on Smart Structures and Microsystems (IS3M), Hong Kong, October 19-21, 2000, Paper C3-3.

Compressed PostScript file.


Technical Report UTEP-CS-00-25, May 2000
Updated version UTEP-CS-00-25b, August 2000
On the Optimal Choice of Quality Metric in Image Compression: A Soft Computing Approach
Hung T. Nguyen, Olga Kosheleva, Vladik Kreinovich, and Liya Ding

Published in Proceedings of the Sixth International Conference on Control, Automation, Robotics and Vision ICARCV'2000, Singapore, December 5-8, 2000.

In a lossy compression, the reconstructed image I' differs from the original image I. In different situations, different compression methods lead to reconstructions of different quality, so it is important to select, in each situation, the best compression method. It is natural to select the compression method for which the average value of some quality metric d(I,I') is the smallest. Which quality metric should we choose? We show that, under reasonable symmetry conditions, L^p metrics d(I,I') = integral of |I(x)-I'(x)|^p dx are the best, and we show how to compute the optimal value of p from the expected relative size of the informative part of the image.
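
Here is a minimal Python sketch of the metric itself, with NumPy arrays standing in for images; the exponent p is exactly the quantity whose optimal value the paper shows how to compute:

    import numpy as np

    def lp_distance(I, I_prime, p):
        """L^p quality metric between an image I and its reconstruction I':
        (integral of |I(x) - I'(x)|^p dx)^(1/p), approximated by a pixel average."""
        diff = np.abs(I.astype(float) - I_prime.astype(float))
        return (diff ** p).mean() ** (1.0 / p)

    rng = np.random.default_rng(0)
    I  = rng.random((64, 64))                          # toy "original" image
    Ir = I + 0.01 * rng.standard_normal((64, 64))      # toy "reconstruction"
    print(lp_distance(I, Ir, 1), lp_distance(I, Ir, 2))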

The original and updated files (both in Compressed PostScript).


Technical Report UTEP-CS-00-24b, October 2000
Updated version UTEP-CS-00-24e, June 2001
Interval Methods in Remote Sensing: Reliable Sub-Division of Geological Areas
David D. Coblentz, G. Randy Keller, Vladik Kreinovich, Jan Beck, and Scott A. Starks

Original file in Compressed PostScript;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-24, May 2000
Updated version UTEP-CS-00-24a, June 2000
Towards Reliable Sub-Division of Geological Areas: Interval Approach
David D. Coblentz, Vladik Kreinovich, Brian S. Penn, and Scott A. Starks

Published in Proceedings of the 19th International Conference of the North American Fuzzy Information Society NAFIPS'2000, Atlanta, Georgia, July 13-15, 2000, pp. 368-372;
extended version published in: L. Reznik and V. Kreinovich (eds.), Soft Computing in Measurements and Information Acquisition, Springer-Verlag, 2003, pp. 223-233.

An appropriate subdivision of a geophysical area into segments enables us to extrapolate the results obtained in some locations within a segment (where extensive research was done) to other locations within the same segment, and thus to get a good understanding of the locations which were not thoroughly analyzed.

Often, different evidence and different experts' intuition support different subdivision schemes. For example, in our area - the Rio Grande rift zone - there is some geochemical evidence that this zone is divided into three segments, but, in the view of many researchers, this evidence is not yet sufficiently convincing.

We show that if we use topographical information (this information, e.g., comes from satellite photos), then interval methods lead to a reliable justification for the tripartite subdivision of the Rio Grande rift zone.

Original file in Compressed PostScript;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-23, May 2000
Granularity as an Optimal Approach to Uncertainty - A General Mathematical Idea with Applications to Sleep, Consumption, Traffic Control, Learning, etc.
Vladik Kreinovich and Hung T. Nguyen

Published in Proceedings of the 19th International Conference of the North American Fuzzy Information Society NAFIPS'2000, Atlanta, Georgia, July 13-15, 2000, pp. 316-320.

Traditional statistical and fuzzy approaches to describing uncertainty are continuous in the sense that we use a (potentially infinite) set of values from the interval [0,1] to characterize possible degrees of uncertainty. In reality, experts describe their degree of belief by using one of the finitely many words from natural language; in this sense, the actual description of expert uncertainty is granular.

In this paper, we show that in some reasonable sense, granularity is the optimal way of describing uncertainty. A similar mathematical idea explains similar "granularity" in such diverse areas as sleep, consumption, traffic control, and learning.

Compressed PostScript file; PDF file.


Technical Report UTEP-CS-00-22, May 2000
Fuzzy (Granular) Levels of Quality, with Applications to Data Mining and to Structural Integrity of Aerospace Structures
Roberto A. Osegueda, Carlos Ferregut, Vladik Kreinovich, Seelam Seetharami, and Harry Schulte

Published in Proceedings of the 19th International Conference of the North American Fuzzy Information Society NAFIPS'2000, Atlanta, Georgia, July 13-15, 2000, pp. 348-352.

Experts usually describe quality by using words from natural language such as "perfect", "good", etc. In this paper, we deduce natural numerical values corresponding to these words, and show that these values explain empirical dependencies uncovered in data mining and in the analysis of structural integrity of aerospace structures.

Compressed PostScript file; PDF file.


Technical Report UTEP-CS-00-21, May 2000
How Important is Theory for Practical Problems? A Partial Explanation of Hartmanis' Observation
Vladik Kreinovich and Luc Longpre

Published in Bulletin on the European Association for Theoretical Computer Science EATCS, 2000, Vol. 71, pp. 160-164.

Compressed PostScript file.


Technical Report UTEP-CS-00-20a, June 2000
Asymptotically Optimal Algorithms for Weather Applications of Smart Dust
Edward Vidal, Luc Longpre, Vladik Kreinovich, Huang Haitao, and Yeung Yam

Published in the Proceedings of the International Symposium on Smart Structures and Microsystems (IS3M), Hong Kong, October 19-21, 2000, Paper C3-2.

Compressed PostScript file.


Technical Report UTEP-CS-00-20, May 2000
Updated version UTEP-CS-00-20c, October 2000
Geombinatorics of "Smart Dust"
Edward Vidal, Luc Longpre, Vladik Kreinovich, and Huang Haitao

Published in Geombinatorics, 2001, Vol. 11, No. 2, pp. 54-58.

Smart Dust is a collection of small sensor-equipped leaves which send their information to two or more receivers. When a receiver gets a signal from a sensor, it can determine the direction from which this signal came. By combining the directions from two different receivers, we can determine the 3-D locations of all the leaves, and thus transform their sensor readings into a 3-D picture of the corresponding parameters (temperature, moisture, etc.). The more leaves we send, the more information we gather. However, since the direction can only be measured with a certain accuracy, when we send too many leaves, we lose the ability to match their directions and thus, we can no longer reconstruct the leaves' 3-D locations. Thus, there is an optimal number of leaves for which we can get the largest amount of information. Determining this optimal number is an open problem.
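
To make the geometry concrete, here is a minimal Python sketch of the reconstruction step: given two receiver positions and the measured direction from each receiver to a leaf, it returns the midpoint of the shortest segment between the two (generally skew) rays. The setup and values are illustrative, not taken from the paper.

    import numpy as np

    def locate_leaf(p1, d1, p2, d2):
        """3-D position estimate from two receivers at p1, p2 and measured
        directions d1, d2: midpoint of the closest approach of the two rays."""
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w = p1 - p2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w, d2 @ w
        denom = a * c - b * b              # ~0 when the two directions are parallel
        t1 = (b * e - c * d) / denom
        t2 = (a * e - b * d) / denom
        return 0.5 * ((p1 + t1 * d1) + (p2 + t2 * d2))

    leaf = np.array([3.0, 4.0, 5.0])
    p1, p2 = np.array([0.0, 0.0, 0.0]), np.array([10.0, 0.0, 0.0])
    est = locate_leaf(p1, leaf - p1, p2, leaf - p2)
    print(est)   # ~[3, 4, 5]; noisy directions give correspondingly noisy estimates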

Original compressed PostScript file; final version (compressed PostScript).


Technical Report UTEP-CS-00-19, April 2000
Towards Optimal Mosaicking of Multi-Spectral Images
Francisco Garcia, Roberto Araiza, and Brian Rzycki

Published in the Proceedings of the Second National NASA Student Conference on Earth Sciences, Aero-Space Technology, Human Exploration and Development of Space, Space Science, and Crosscutting Technologies, Nashville, Tennessee, April 7-10, 2000, pp. 261-266.

To cover a certain area, it is often necessary to combine several satellite photos. To get a proper combination, we need to appropriately position and orient these photos relative to one another, i.e., mosaic these photos. With the new generation of multi-spectral satellites, for each area, we have several hundred images which correspond to different wavelengths. At present, when we mosaic two images, we use only one of the wavelengths and ignore the information from the other wavelengths. It is reasonable to try to decrease the mosaicking error by using the images corresponding to all the wavelengths. In this paper, we present an algorithm for such optimal mosaicking.
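
As an illustration of the basic idea (not the algorithm of the paper), the following minimal Python sketch estimates the relative shift between two multi-spectral images by pooling FFT-based cross-correlations over all bands instead of using a single band:

    import numpy as np

    def estimate_shift(A, B):
        """Estimate the cyclic (dy, dx) shift that aligns two multi-band images
        A and B of shape (bands, H, W) by summing cross-correlations over bands."""
        bands, H, W = A.shape
        corr = np.zeros((H, W))
        for k in range(bands):
            a = A[k] - A[k].mean()
            b = B[k] - B[k].mean()
            corr += np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b))).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        return int(dy), int(dx)

    rng = np.random.default_rng(1)
    A = rng.random((5, 64, 64))                    # 5 spectral bands, toy data
    B = np.roll(A, shift=(7, 11), axis=(1, 2))     # B is A cyclically shifted
    B = B + 0.05 * rng.standard_normal(B.shape)    # plus band-wise noise
    print(estimate_shift(B, A))                    # ~ (7, 11)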

Compressed PostScript file; pdf file.


Technical Report UTEP-CS-00-18, March 2000
Updated version UTEP-CS-00-18a, May 2000
Integrating Domain Knowledge with Data: From Crisp to Probabilistic and Fuzzy Knowledge
Hung T. Nguyen, Vladik Kreinovich, and Leonid Reznik

Published in Proceedings of the UAI-2000 Workshop on Fusion of Domain Knowledge with Data for Decision Support, Stanford University, June 30, 2000.

It is well known that prior knowledge about the domain can improve (often drastically) the accuracy of the estimates of the physical quantities in comparison with the estimates which are solely based on the measurement results. In this paper, we show how a known method of integrating crisp domain knowledge with data can be (naturally) extended to the case when the domain knowledge is described in statistical or fuzzy terms.

First version and updated version (both in compressed PostScript).


Technical Report UTEP-CS-00-17, February 2000
Error Estimations for Indirect Measurements: Randomized vs. Deterministic Algorithms for "Black-Box" Programs
Vladik Kreinovich and Raul Trejo

Published in: Sanguthevar Rajasekaran, Panos Pardalos, John Reif, and Jose Rolim (eds.), Handbook on Randomized Computing, Kluwer, 2001, pp. 673-729.

In many real-life situations, it is very difficult or even impossible to directly measure the quantity y in which we are interested: e.g., we cannot directly measure a distance to a distant galaxy or the amount of oil in a given well. Since we cannot measure such quantities directly, we can measure them indirectly: by first measuring some related quantities x1,...,xn, and then by using the known relation between xi and y to reconstruct the value of the desired quantity y.

In practice, it is often very important to estimate the error of the resulting indirect measurement. In this paper, we describe and compare different deterministic and randomized algorithms for solving this problem in the situation when a program for transforming the estimates X1,...,Xn for xi into an estimate for y is only available as a black box (with no source code at hand).

We consider this problem in two settings: statistical, when the measurement errors di=Xi-xi are independent Gaussian random variables with 0 average and known standard deviations Di, and interval, when the only known information about di is that |di| cannot exceed a known bound Di. In the statistical setting, we describe the optimal error estimation algorithm; in the interval setting, we describe a new algorithm which may be not optimal but which is better than the previously known ones.
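
For the statistical setting, the simplest baseline (not the optimal algorithm described in the paper) is straightforward Monte Carlo; here is a minimal Python sketch that treats the data-processing program f as a black box, where the particular f is only a placeholder:

    import math, random

    def estimate_result_error(f, X, D, n_samples=10_000):
        """Monte Carlo error estimate for y = f(x1, ..., xn): given estimates
        X = (X1, ..., Xn) and standard deviations D = (D1, ..., Dn) of independent
        Gaussian measurement errors, return an estimate of the standard deviation
        of the result; f is called only as a black box."""
        samples = []
        for _ in range(n_samples):
            perturbed = [Xi + random.gauss(0.0, Di) for Xi, Di in zip(X, D)]
            samples.append(f(*perturbed))
        mean = sum(samples) / n_samples
        return math.sqrt(sum((y - mean) ** 2 for y in samples) / (n_samples - 1))

    # placeholder black box: y = x1 * x2 + x3 (any program could stand here)
    f = lambda x1, x2, x3: x1 * x2 + x3
    print(estimate_result_error(f, X=(2.0, 3.0, 1.0), D=(0.1, 0.1, 0.05)))
    # for comparison, linearization gives sqrt((3*0.1)^2 + (2*0.1)^2 + 0.05^2) ~ 0.364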

Compressed PostScript file; pdf file.


Technical Report UTEP-CS-00-16, February 2000
Updated version UTEP-CS-00-16a, March 2001
Computing the Shape of the Image of a Multi-Linear Mapping is Possible but Computationally Intractable: Theorems
Raul Trejo and Vladik Kreinovich

Published in International Journal of General Systems, 2002, Vol. 31, No. 1, pp. 17-27.

In systems without inertia (or with negligible inertia), a change in the values of the control variables x1,...,xn leads to an immediate change in the state z of the system. In more precise terms, for such systems, every component zi of the state vector z=(z1,...,zd) is a function of the control variables. When we know what state z we want to achieve, the natural question is: can we achieve this state, i.e., are there values of the control variables which lead to this very state?

The simplest possible functional dependence is described by linear functions. For such functions, the question of whether we can achieve a given state z reduces to the solvability of the corresponding system of linear equations; this solvability can be checked by using known (and feasible) algorithms from linear algebra.

Next in complexity is the case when instead of a linear dependence, we have a multi-linear dependence. In this paper, we show that for multi-linear functions, the controllability problem is, in principle, algorithmically solvable, but it is computationally hard (NP-hard).

Original file in Compressed PostScript;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-15, February 2000
On Granularity in Fuzzy Logic: Minimum and Maximum are the Only Absolutely Granular t-Norm and t-Conorm
Vladik Kreinovich and Bernadette Bouchon-Meunier

Compressed PostScript file.


Technical Report UTEP-CS-00-14, February 2000
Allowing Two Moves in Succession Increases the Game's Bias: A Theorem
Vladik Kreinovich

Published in International Journal of Intelligent Systems, 2001, Vol. 16, No. 1, pp. 209-213.

Compressed PostScript file.


Technical Report UTEP-CS-00-13, February 2000
Updated version UTEP-CS-00-13a, May 2000
Computational Complexity of Planning Based on Partial Information about the System's Present and Past States
Chitta Baral, Le-Chi Tuan, Raul Trejo, and Vladik Kreinovich

Published in: J. Lloyd et al. (eds.), Proceedings of the First International Conference on Computational Logic CL'2000, London, July 24-28, 2000, Springer Lecture Notes in Artificial Intelligence, Vol. 1861, pp. 882-896.

Planning is a very important AI problem, and it is also a very time-consuming AI problem. To get an idea of how complex different planning problems are, it is useful to describe the computational complexity of different general planning problems. This complexity has been described for problems in which planning is based on the (complete or partial) information about the current state of the system. In real-life planning, in addition to this information, we often also use the knowledge about the system's past behavior. To describe such more realistic planning situations, a special language L was developed in 1997 by C. Baral, M. Gelfond and A. Provetti. In this paper, we expand the known results about computational complexity of planning (including our own previous results about complexity of planning in the planning language A) to this more general class of planning problems.

Original file in Compressed PostScript;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-12, January 2000
Invariance-Based Justification of the Maximum Entropy Method and of Generalized Maximum Entropy Methods in Data Processing
Hung T. Nguyen, Olga M. Kosheleva, and Vladik Kreinovich

The maximum entropy method and its generalizations are very useful in data processing. In this paper, we show that these methods naturally follow from reasonable invariance requirements.

Compressed PostScript file.


Technical Report UTEP-CS-00-11, January 2000
1st updated version UTEP-CS-00-11a, August 2007
2nd updated version UTEP-CS-00-11b, October 2007
3rd updated version UTEP-CS-00-11c, October 2008
Asymmetric Information Measures: How to Extract Knowledge from an Expert So That the Expert's Effort is Minimal
Hung T. Nguyen, Vladik Kreinovich, and Elizabeth N. Kamoroff

Published in Proceedings of the International Conference on Information Technology InTech'07, Sydney, Australia, December 12-14, 2007, pp. 136-145; full version published in International Journal of Automation and Control (IJAAC), 2008, Vol. 2, No. 2/3, pp. 153-177.

In knowledge acquisition, we ask experts questions and put the answers into a computer system. Since this is a very time-consuming task, it is desirable to minimize the effort of an expert.

As a crude estimate for this effort, we can take the number of binary (yes-no) questions that we ask. The procedure that minimizes this number is binary search.

This approach does not take into account that people often feel more comfortable answering "yes" than answering "no". So, to make our estimates more realistic, we take into consideration that a negative answer requires a bigger effort.

This paper describes a procedure that minimizes the effort of an expert. We also estimate the effort of this "optimal" search procedure.
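
One way to see how the optimal procedure departs from plain binary search is a small dynamic-programming sketch in Python (a brute-force illustration under the stated cost model, not the paper's procedure): it computes the smallest worst-case effort needed to identify one of n possible answers when a "yes" costs 1 unit and a "no" costs more.

    from functools import lru_cache

    def worst_case_effort(n, yes_cost=1.0, no_cost=2.0):
        """Smallest worst-case effort needed to identify one of n possible answers
        with yes/no questions when the two answers cost the expert differently."""
        @lru_cache(maxsize=None)
        def W(m):
            if m <= 1:
                return 0.0
            # split the m candidates into a "yes" part of size k and a "no" part
            return min(max(yes_cost + W(k), no_cost + W(m - k))
                       for k in range(1, m))
        return W(n)

    print(worst_case_effort(64, 1.0, 1.0))   # 6.0: plain binary search is optimal
    print(worst_case_effort(64, 1.0, 2.0))   # larger than 6: "no" costs more, so
                                             # the optimal splits become uneven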

Original file in Compressed PostScript;
1st updated version in pdf and in Compressed PostScript;
2nd updated version in pdf and in Compressed PostScript;
3rd updated version in pdf and in Compressed PostScript.


Technical Report UTEP-CS-00-10, January 2000
Updated version UTEP-CS-00-10a, April 2000
Towards Feasible Approach to Plan Checking under Probabilistic Uncertainty: Interval Methods
Raul Trejo, Vladik Kreinovich, and Chitta Baral

Published in Proc. of the 17th National Conference on Artificial Intelligence AAAI'2000, Austin, TX, July 30-August 3, 2000, pp. 545-550.

The main problem of planning is to find a sequence of actions that an agent must perform to achieve a given objective. An important part of planning is checking whether a given plan achieves the desired objective. Historically, in AI, the planning and plan checking problems were mainly formulated and solved in a deterministic environment, when the initial state is known precisely and when the result of each action in each state is known (and uniquely determined). In this deterministic case, planning is difficult, but plan checking is straightforward. In many real-life situations, we only know the probabilities of different fluents; in such situations, even plan checking becomes computationally difficult. In this paper, we describe how methods of interval computations can be used to get a feasible approximation to plan checking under probabilistic uncertainty. The resulting method is a natural generalization of the 0-approximation proposed earlier to describe planning in the case of partial knowledge. It turns out that some of the resulting probabilistic techniques coincide with heuristically proposed "fuzzy" methods. Thus, we justify these fuzzy heuristics as a reasonable feasible approximation to the (NP-hard) probabilistic problem.

Original file in Compressed PostScript;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-09, January 2000
Computational Complexity of Planning, Diagnosis, and Diagnostic Planning in the Presence of Static Causal Laws
Chitta Baral, Le Chi Tuan, and Vladik Kreinovich

Planning is a very important AI problem, and it is also a very time-consuming AI problem. To get an idea of how complex different planning problems are, it is useful to describe the computational complexity of different general planning problems. This complexity has been described for problems in which the result res(a,s) of applying an action a to a system in a state s is uniquely determined by the action a and by the state s. In real-life planning, some consequences of certain actions are non-deterministic. In this paper, we expand the known results about computational complexity of planning (with and without sensing) to this more general class of planning problems.

In addition to analyzing computational complexity of regular planning - in which the goal is to achieve a certain property of the system - we also analyze the computational complexity of a simpler problem - of diagnosing the system.

Compressed PostScript file.


Technical Report UTEP-CS-00-08, January 2000
Updated version UTEP-CS-00-08b, September 2000
Updated version UTEP-CS-00-08c, December 2001
From Planning to Searching for the Shortest Plan: An Optimal Transition
Raul A. Trejo, Joel Galloway, Charanjiv Sachar, Vladik Kreinovich, Chitta Baral, and Le Chi Tuan

Preliminary version published in Proc. of International Conference on Intelligent Technologies, Bangkok, Thailand, December 13-15, 2000, pp. 17-23; final version published in the International Journal of Uncertainty, Fuzziness, and Knowledge Based Systems, 2001, Vol. 9, No. 6, pp. 827-838.

Since Kautz and Selman's 1992 ECAI paper on satisfiability-based planning, there have been several works on planning through finding models of a logical theory. Most of these works focus on finding a plan of a given length. If we want to find the shortest plan, then usually we try plans of length 1, 2, ..., until we find the first length for which such a plan exists. When the planning problem is difficult and the shortest plan is of a reasonable length, this linear search can take a long time; to speed up the process, it has been proposed (by Lifschitz and others) to use binary search instead. Binary search for the value of a certain parameter x is optimal when for each tested value x, we need the same amount of computation time; in planning, the computation time increases with the size of the plan and, as a result, binary search is no longer optimal. We describe an optimal way of combining planning algorithms into a search for the shortest plan - optimal in the sense of worst-case complexity. We also describe an algorithm which is asymptotically optimal in the sense of average complexity.
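
To make the setting concrete, here is a minimal Python sketch of one common strategy (doubling followed by binary search) for finding the shortest plan length, with the planner treated as a black-box predicate plan_exists(L) and monotonicity assumed; this illustrates the search problem, not the optimal combination derived in the paper.

    def shortest_plan_length(plan_exists, max_len=10_000):
        """Find the smallest L with plan_exists(L), assuming monotonicity
        (if a plan of length L exists, so does a plan of any larger length)."""
        # phase 1: doubling, to bracket the answer
        hi = 1
        while hi <= max_len and not plan_exists(hi):
            hi *= 2
        if hi > max_len:
            return None
        lo = hi // 2 + 1 if hi > 1 else 1
        # phase 2: binary search between the last failure and the first success
        while lo < hi:
            mid = (lo + hi) // 2
            if plan_exists(mid):
                hi = mid
            else:
                lo = mid + 1
        return lo

    # toy stand-in for "a plan of length L exists": shortest plan has length 37
    print(shortest_plan_length(lambda L: L >= 37))   # 37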

Original compressed PostScript file; updated version (compressed postscript);
final version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-07, January 2000
Updated version UTEP-CS-00-07a, February 2000
Chu Spaces: Towards New Justification for Fuzzy Heuristics
Nhu Nguyen, Hung T. Nguyen, and Vladik Kreinovich

Published in Leo Obrst and Inderjeet Mani (eds.), Proceedings of the Workshop on Semantic Approximation, Granularity, and Vagueness, A Workshop of the Seventh International Conference on Principles of Knowledge Representation and Reasoning KR'2000, Breckenridge, Colorado, April 11, 2000, pp. 51-56.

We show that Chu spaces, a new formalism used to describe parallelism and information flow, provide uniform explanations for different choices of fuzzy methodology, such as choices of fuzzy logical operations, of membership functions, of defuzzification, etc.

Original compressed PostScript file; final version (compressed PostScript).


Technical Report UTEP-CS-00-06, January 2000
Updated version UTEP-CS-00-06a, March 2001
1st Order, 2nd Order, What Next? Do We Really Need Third-Order Descriptions: A View from a Realistic (Granular) Viewpoint
Vladik Kreinovich and Hung T. Nguyen

Published in the Proceedings of the Joint 9th World Congress of the International Fuzzy Systems Association and 20th International Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS 2001, Vancouver, Canada, July 25-28, 2001, pp. 1908-1913.

To describe experts' uncertainty in a knowledge-based system, we usually use numbers from the interval [0,1] (subjective probabilities, degrees of certainty, etc.). The most direct way to get these numbers is to ask the expert; however, the expert may not be 100% certain exactly which number describes his uncertainty; so, we end up with a second-order uncertainty - a degree of certainty describing to what extent a given number d adequately describes the expert's uncertainty about a given statement A. At first glance, it looks like we should not stop at this second order: the expert is probably as uncertain about his second-order degree as about his first-order one, so we need third order, fourth order descriptions, etc. In this paper, we show that from a realistic (granular) viewpoint, taking into consideration that in reality, an expert would best describe his degrees of certainty by a word from a finite set of words, it is sufficient to have a second-order description; from this viewpoint, higher order descriptions can be uniquely reconstructed from the second-order one, and in this sense, the second-order description is sufficient.

Original file in Compressed PostScript;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-00-05, January 2000
Discrete (Granular) Logics: A New (Natural) Notion of Continuity, with a Complete Description Of All Continuous Granular Logics
Hung T. Nguyen and Vladik Kreinovich

In most knowledge-based systems, the experts' uncertainty is described by a real number from the interval [0,1] (this number is called subjective probability, degree of certainty, etc.). However, experts usually use a small finite set of words to describe their degree of uncertainty; thus, to adequately describe the expert's opinion, it is desirable to use a finite (granular) logic. If all we know about the expert's opinion on two statements A and B is this expert's degrees of certainty d(A) and d(B) in these two statements, and the user asks a query "A and B?", then we need to estimate the degree d(A and B) based on the given values d(A) and d(B). In this paper, we formalize the natural demand that gradual changes in d(A) and d(B) must lead to gradual changes in our estimate for d(A and B) (we call it continuity). We show that the only continuous "and"-operation is min(a,b). Likewise, the only continuous "or"-operation is max(a,b), the only continuous "not"-operation corresponds to f(a)=1-a, etc.
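
A minimal Python sketch of what this means on a finite scale (an illustration, not part of the paper): on the granular scale {0, 1/(g-1), ..., 1}, the operations min, max, and 1-a stay on the scale, and a one-granule change of an argument changes the result by at most one granule.

    from fractions import Fraction

    g = 7                                          # number of granules ("words")
    scale = [Fraction(i, g - 1) for i in range(g)]
    step = Fraction(1, g - 1)

    and_op, or_op = min, max
    def not_op(a):
        return 1 - a

    # closure: the results stay on the granular scale
    assert all(and_op(a, b) in scale and or_op(a, b) in scale and not_op(a) in scale
               for a in scale for b in scale)

    # continuity: a one-granule change of an argument changes the result
    # by at most one granule
    assert all(abs(op(scale[i + 1], b) - op(scale[i], b)) <= step
               for op in (and_op, or_op)
               for i in range(g - 1) for b in scale)

    print("closure and one-granule continuity hold for min, max, and 1-a")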

Compressed PostScript file.


Technical Report UTEP-CS-00-04, January 2000
Choosing a Physical Model: Why Symmetries?
Raul A. Trejo, Vladik Kreinovich, and Luc Longpre

Published in Bulletin of the European Association for Theoretical Computer Science EATCS, 2000, Vol. 70, pp. 159-161.

Compressed PostScript file; PDF file.


Technical Report UTEP-CS-00-03, January 2000
Aircraft Integrity and Reliability
Carlos Ferregut, Roberto A. Osegueda, Yohans Mendoza, Vladik Kreinovich, and Timothy J. Ross

Published in: Jane Booker, Jerry Parkinson, and Timothy J. Ross (eds.), Combined Fuzzy Logic and Probability Applications, SIAM, Philadelphia, 2000, pp. 219-242.

In his recent paper "Probability theory needs an infusion of fuzzy logic to enhance its ability to deal with real-world problems", L. A. Zadeh argues exactly what the title says. In this paper, we give an example of a real-world problem for which such an infusion is indeed successful: the problem of aircraft integrity and reliability.

Compressed PostScript file; PDF file.


Technical Report UTEP-CS-00-02, January 2000
Intelligent Mining in Image Databases, with Applications to Satellite Imaging and to Web Search
Stephen Gibson, Vladik Kreinovich, Luc Longpre, Brian Penn, and Scott A. Starks

Published in: A. Kandel, H. Bunke, and M. Last (eds.), Data Mining and Computational Intelligence, Springer-Verlag, Berlin, 2001, pp. 309-336.

An important part of our knowledge is in the form of images. For example, a large amount of geophysical and environmental data comes from satellite photos, a large amount of the information stored on the Web is in the form of images, etc. It is therefore desirable to use this image information in data mining. Unfortunately, most existing data mining techniques have been designed for mining numerical data and are thus not well suited for image databases. Hence, new methods are needed for image mining. In this paper, we show how data mining can be used to find common patterns in several images.

Original file in Compressed PostScript;
Final version in Compressed Postscript and in pdf


Technical Report UTEP-CS-00-01, January 2000
Updated version UTEP-CS-00-01a, June 2000
Some Practical Applications of Soft Computing and Data Mining
Hung T. Nguyen, Nadipuram R. Prasad, Vladik Kreinovich, and Habib Gassoumi

Published in: A. Kandel, H. Bunke, and M. Last (eds.), Data Mining and Computational Intelligence, Springer-Verlag, Berlin, 2001, pp. 273-307.

Traditional data mining techniques mainly deal with a search for patterns in traditional databases, where data consists of numbers and words. In many application areas, however, data is more complicated: real-life data is often obtained as an image from a camera rather than a few measurements. Furthermore, this image can also change dynamically. In this paper, we present several examples of how soft computing is related to mining such data.

Original compressed PostScript file; final version (compressed PostScript).