University of Texas at El Paso
Computer Science Department
Abstracts of 2001 Reports


Technical Report UTEP-CS-01-33, November 2001
Representation, Elicitation, and Aggregation of Uncertainty in Risk Analysis - from Traditional Probabilistic Techniques to More General, More Realistic Approaches: A Survey
Scott Ferson and Vladik Kreinovich

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-32, October 2001
Constrained Fuzzy Arithmetic
Vladik Kreinovich, Mirko Navara, and Zdenek Zabokrtsky

Published in: P. Hajek (ed.), Proceedings of the Workshop on Soft Computing SOFSEM'2001, Piestany, Slovakia, November 29-30, 2001, pp. 1-3.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-31, October 2001
Aerospace Applications of Intervals: From Geospatial Data Processing to Fault Detection in Aerospace Structures
Vladik Kreinovich and Scott A. Starks

Published in International Journal of Uncertainty, Fuzziness and Knowledge-Based Systems, 2001, Vol. 9, No. 6, pp. 721-730.

This paper presents a brief introduction to interval computations and their use in aerospace applications.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-30, October 2001
Non-Associative Operations
Bernadette Bouchon-Meunier, Vladik Kreinovich, and Hung T. Nguyen

Published in the Proceedings of the Second International Conference on Intelligent Technologies InTech'2001, Bangkok, Thailand, November 27-29, 2001, pp. 39-46.

How is fuzzy logic usually formalized? There are many seemingly reasonable requirements that a logic should satisfy: e.g., since A&B and B&A mean the same, the corresponding and-operation should be commutative. Similarly, since A&A means the same as A, we should expect the and-operation to satisfy this property as well, etc. It turns out to be impossible to satisfy all these seemingly natural requirements, so usually, some requirements are picked as absolutely true (like commutativity or associativity), and others are ignored if they contradict the picked ones. This idea leads to a neat mathematical theory, but the analysis of real-life expert reasoning shows that all the requirements are only approximately satisfied; instead, we should require all of these requirements to be satisfied to some extent. In this paper, we show the preliminary results of analyzing such operations. In particular, we show that non-associative operations explain the empirical 7±2 law in psychology, according to which a person can normally distinguish between no more than 7 plus or minus 2 classes.
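
A minimal illustration of what such an operation can look like (our example, not the paper's): the average of the two standard t-norms min(a,b) and a*b is still commutative, but associativity is lost.

    # A commutative but non-associative "and"-operation: the average of
    # the t-norms min(a, b) and a*b (illustration only, not the paper's).
    def and_op(a, b):
        return 0.5 * min(a, b) + 0.5 * a * b

    x, y, z = 0.8, 0.6, 0.4
    print(and_op(and_op(x, y), z))   # ~0.308 -- (x & y) & z
    print(and_op(x, and_op(y, z)))   # ~0.288 -- x & (y & z): not associative
    assert and_op(x, y) == and_op(y, x)   # commutativity still holds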

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-29, October 2001
Interval Mathematics: Algebraic Aspects
Sompong Dhompongsa, Vladik Kreinovich, and Hung T. Nguyen

Published in the Proceedings of the Second International Conference on Intelligent Technologies InTech'2001, Bangkok, Thailand, November 27-29, 2001, pp. 30-38.

Many application-oriented mathematical models deal with real numbers. In real life, due to the inevitable measurement inaccuracy, we do not know the exact values of the measured quantities; we know, at best, the intervals of possible values. It is thus desirable to analyze how the corresponding mathematical results will look if we replace numbers by intervals.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-28, October 2001
A Statistical Analysis for Rule Base Reduction
Vladik Kreinovich, Claude Langrand, and Hung T. Nguyen

Published in the Proceedings of the Second International Conference on Intelligent Technologies InTech'2001, Bangkok, Thailand, November 27-29, 2001, pp. 47-52.

When we take into account more input variables in a control system, the number of rules grows exponentially. To decrease the number of rules, we propose not to explicitly state the control for every combination of input variables, but to use the "otherwise" clause. In this paper, we provide a simple statistical analysis of the resulting reduction and show, on a case study, that this reduction can indeed be drastic.
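
A back-of-the-envelope sketch of the reduction (the numbers below are invented, not those of the paper's case study): with n input variables and k fuzzy labels per variable, an explicit rule base needs k^n rules; if a fraction p of all input combinations maps to one default action, the "otherwise" clause covers all of them with a single rule.

    # Hypothetical illustration of rule-base reduction via "otherwise".
    def rules_with_otherwise(n, k, p_default):
        full = k ** n                          # one rule per combination
        kept = round(full * (1 - p_default))   # rules differing from default
        return full, kept + 1                  # +1 for "otherwise" itself

    full, reduced = rules_with_otherwise(n=5, k=7, p_default=0.95)
    print(full, reduced)   # 16807 explicit rules vs. 841 with "otherwise"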

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-27, October 2001
Combining Fuzzy and Probabilistic Knowledge Using Belief Functions
Vladik Kreinovich, Claude Langrand, and Hung T. Nguyen

Published in: Proceedings of the Second Vietnam-Japan Bilateral Symposium on Fuzzy Systems and Applications VJFUZZY'2001, Hanoi, December 7-8, 2001, pp. 191-198.

Some knowledge comes in probabilistic terms, some in fuzzy terms. These formalisms are drastically different, so it is difficult to combine the corresponding knowledge. A natural way to combine fuzzy and probabilistic knowledge is to find a formalism which enables us to express both types of knowledge, and then to use a combination rule from this general formalism. In this paper, as such a formalism, we propose to use belief functions. For the case when the universe of discourse is the set of all real numbers, we derive new explicit, easy-to-compute analytical formulas for the resulting combination.
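
The paper derives its formulas for the real line; as a rough finite illustration of the combination mechanics, here is a sketch of Dempster's rule of combination on a finite frame of discernment (the example mass functions are invented):

    from itertools import product

    # Dempster's rule: multiply masses, collect them on intersections,
    # and renormalize away the mass that falls on the empty set.
    def dempster_combine(m1, m2):
        combined, conflict = {}, 0.0
        for (a, wa), (b, wb) in product(m1.items(), m2.items()):
            c = a & b
            if c:
                combined[c] = combined.get(c, 0.0) + wa * wb
            else:
                conflict += wa * wb
        return {s: w / (1.0 - conflict) for s, w in combined.items()}

    # Probabilistic knowledge: masses on singletons; fuzzy knowledge:
    # masses on nested sets (a consonant belief function).
    m_prob = {frozenset({'a'}): 0.7, frozenset({'b'}): 0.3}
    m_fuzzy = {frozenset({'a'}): 0.4, frozenset({'a', 'b'}): 0.6}
    print(dempster_combine(m_prob, m_fuzzy))   # {'a'}: ~0.80, {'b'}: ~0.20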

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-26, October 2001
How to Interpret Neural Networks in Terms of Fuzzy Logic?
Sompong Dhompongsa, Vladik Kreinovich, and Hung T. Nguyen

Published in: Proceedings of the Second Vietnam-Japan Bilateral Symposium on Fuzzy Systems and Applications VJFUZZY'2001, Hanoi, December 7-8, 2001, pp. 184-190.

Neural networks are a very efficient learning tool, e.g., for transforming the experience of an expert human controller into the design of an automatic controller. It is desirable to reformulate the neural network's expression for the input-output function in the terms most understandable to an expert controller, i.e., by using words from natural language. There are several methodologies for transforming such natural-language knowledge into a precise form; since these methodologies have to take into consideration the uncertainty (fuzziness) of natural language, they are usually called fuzzy logics.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-25, August 2001
Updated version UTEP-CS-01-25a, May 2002
Interval Mathematics for Analysis of Multiresolutional Systems
Vladik Kreinovich and Richard Alo

Preliminary version appeared in: E. R. Messina and A. R. Meystel (eds.), Measuring the Performance and Intelligence of Systems: Proceedings of the 2001 PerMIS Workshop, Mexico City, September 4, 2001, NIST Publication 982, June 2002, pp. 37-48; final version published in Archives of Control Sciences, 2002, Vol. 12, No. 4, pp. 323-350.

The more complex the problem, the more complex the system necessary for solving this problem. For very complex problems, it is no longer possible to design the corresponding system on a single resolution level; it becomes necessary to have multiresolutional systems. When analyzing such systems -- e.g., when estimating their performance and/or their intelligence -- it is reasonable to use the multiresolutional character of these systems: first, we analyze the system on the low-resolution level, and then we sharpen the results of the low-resolution analysis by considering higher-resolution representations of the analyzed system. The analysis of the low-resolution level provides us with an approximate value of the desired performance characteristic. In order to make a definite conclusion, we need to know the accuracy of this approximation. In this paper, we describe interval mathematics -- a methodology for estimating such accuracy. The resulting interval approach is also extremely important for tessellating the search space when searching for optimal control. We overview the corresponding theoretical results and present several case studies.
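
A minimal sketch of the interval methodology (the example function and bounds are invented): interval arithmetic turns approximate low-resolution values into guaranteed bounds on a derived performance estimate.

    # Naive interval arithmetic: enough to bound x*y + x when x and y
    # are only known up to intervals.
    class Interval:
        def __init__(self, lo, hi):
            self.lo, self.hi = lo, hi
        def __add__(self, other):
            return Interval(self.lo + other.lo, self.hi + other.hi)
        def __mul__(self, other):
            p = [self.lo * other.lo, self.lo * other.hi,
                 self.hi * other.lo, self.hi * other.hi]
            return Interval(min(p), max(p))
        def __repr__(self):
            return f"[{self.lo}, {self.hi}]"

    # Low-resolution analysis: x in [1.9, 2.1], y in [0.9, 1.1].
    x, y = Interval(1.9, 2.1), Interval(0.9, 1.1)
    print(x * y + x)   # ~[3.61, 4.41] -- guaranteed enclosure of x*y + x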

Original file in Compressed PostScript and pdf;
Updated file in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-24, August 2001
Updated version UTEP-CS-01-24b, April 2002
On Efficient Representation of Expert Knowledge by Fuzzy Logic: Towards an Optimal Combination of Granularity and Higher-Order Approaches
Hung T. Nguyen and Vladik Kreinovich

Published in: Ajith Abraham, Lakhmi Jain, and Janusz Kacprzyk (eds.), Recent Advances in Intelligent Paradigms and Applications, Springer-Verlag, 2002, pp. 107-132.

A natural approach to designing an intelligent system is to incorporate expert knowledge into this system. One of the main approaches to translating this knowledge into computer-understandable terms is the approach of fuzzy logic. It has led to many successful applications, but in several aspects, the resulting computer representation is somewhat different from the original expert meaning. Two related approaches have been used to make fuzzy logic more adequate in representing expert reasoning: granularity and higher-order approaches. Each approach is successful in some applications where the other does not succeed so well; it is therefore desirable to combine these two approaches. This idea of combining the two approaches is very natural, but so far, it has led to few successful practical applications. In this paper, we provide results aimed at finding a better (ideally optimal) way of combining these approaches.

Original version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-23, July 2001
Second-Order Uncertainty as a Bridge Between Probabilistic and Fuzzy Approaches
Hung T. Nguyen, Vladik Kreinovich, and Luc Longpre

Published in Proceedings of the 2nd Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'01, Leicester, England, September 5-7, 2001, pp. 410-413.

Using the example of physics, we show that the traditional one-level description of uncertainty is not completely adequate. A more adequate description is hierarchical: it supplements the more traditional first-order uncertainty with second-order, third-order, and more sophisticated models. In particular, the second-order approach seems to provide a bridge between probabilistic and fuzzy approaches to uncertainty.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-22, July 2001
Towards Fusing Sophisticated Mathematical Knowledge and Informal Expert Knowledge: An Arbitrary Metric Can Be Naturally Interpreted in Fuzzy Terms
Hung T. Nguyen, Vladik Kreinovich, and Witold Pedrycz

Published in Proceedings of the 2nd Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'01, Leicester, England, September 5-7, 2001, pp. 406-409.

In many practical situations, we are faced with a necessity to combine sophisticated mathematical knowledge about the analyzed systems with informal expert knowledge. To make this combination natural, it is desirable to reformulate the abstract mathematical knowledge in understandable intuitive terms. In this paper, we show how this can be done for an abstract metric.

One way to define a metric is to pick certain properties P1, ..., Pn, and to define a similarity between two objects x and y as the degree to which P1(x) is similar to P1(y) and P2(x) is similar to P2(y), etc.

The similarity between two degrees d1 and d2 is naturally described by 1-|d1-d2| (we can use robustness arguments to justify this expression). Since we can have infinitely many properties, we should use min for "and". The distance is then 1 minus this similarity. The resulting metrics are "natural".

At first glance, it may seem that not all metrics are natural in this sense. Interestingly, however, an arbitrary continuous metric can be described in this way.

Similarly, we can describe all "kinematic metrics" (space-time analogues of metrics) in this way, while a probabilistic explanation is difficult.
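
For finitely many properties, the construction can be written out directly (a sketch with hypothetical property degrees; the paper's result covers the general case). Note that 1 - min(1 - |d_i - d_i'|) = max |d_i - d_i'|, i.e., the construction yields the Chebyshev (max) metric on vectors of property degrees.

    # Distance from fuzzy similarity: similarity of degrees d1, d2 is
    # 1 - |d1 - d2|; "and" is min; distance is 1 minus the similarity.
    def fuzzy_distance(px, py):
        similarity = min(1 - abs(d1 - d2) for d1, d2 in zip(px, py))
        return 1 - similarity

    px = [0.9, 0.2, 0.5]   # degrees P1(x), P2(x), P3(x)
    py = [0.6, 0.4, 0.5]   # degrees P1(y), P2(y), P3(y)
    print(fuzzy_distance(px, py))                    # ~0.3
    print(max(abs(a - b) for a, b in zip(px, py)))   # same value: max metric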

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-21, July 2001
Reduction to Independent Variables: From Normal Distribution to General Statistical Case to Fuzzy
Mourad Oussalah, Hung T. Nguyen, and Vladik Kreinovich

Published in Proceedings of the 2nd Conference of the European Society for Fuzzy Logic and Technology EUSFLAT'01, Leicester, England, September 5-7, 2001, pp. 402-405.

In many practical problems, we must combine ("fuse") data represented in different formats, e.g., statistical, fuzzy, etc. The simpler the data, the easier it is to combine them. Therefore, to combine complex data, it is desirable to "decompose" the complex data into simpler (easy-to-combine) chunks.

It is well known that if we have n random variables x1, ..., xn with a joint Gaussian distribution, then we can reduce them to n independent variables by an appropriate linear transformation x1, ..., xn --> y1 = f1(x1,...,xn), ..., yn = fn(x1,...,xn). It is not so well known, but also true, that if we have x1, ..., xn with a known joint probability distribution (not necessarily Gaussian), then we can always reduce them to n independent variables by an appropriate non-linear transformation. In this paper, we show that a similar result holds for fuzzy uncertainty as well.
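
A sketch of the well-known Gaussian case (the covariance matrix is invented): whitening by the Cholesky factor of the covariance is exactly such a linear transformation.

    import numpy as np

    # If x ~ N(0, C) and C = L L^T, then y = L^{-1} x has identity
    # covariance; for Gaussians, uncorrelated components are independent.
    rng = np.random.default_rng(0)
    C = np.array([[2.0, 0.8], [0.8, 1.0]])   # joint covariance of x1, x2
    L = np.linalg.cholesky(C)
    x = rng.multivariate_normal([0, 0], C, size=100_000)
    y = np.linalg.solve(L, x.T).T            # the linear transformation
    print(np.cov(y.T).round(2))              # approximately the identity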

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-20a, January 2002
Automatic Referencing of Multi-Spectral Images
Roberto Araiza, Hongjie Xie, Scott A. Starks, and Vladik Kreinovich

Published in the Proceedings of the IEEE Southwest Symposium on Image Analysis and Interpretation, Santa Fe, New Mexico, USA, April 7-9, 2002, pp. 21-25.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-20, June 2001
Automatic Referencing of Satellite and Radar Images
Sreenath Srikrishnan, Roberto Araiza, Hongjie Xie, Scott A. Starks, and Vladik Kreinovich

Published in Proceedings of the 2001 IEEE Systems, Man, and Cybernetics Conference, Tucson, Arizona, October 7-10, 2001, pp. 2176-2181.

In order to adequately process satellite and radar information, it is necessary to find the exact correspondence between different types of images and between these images and the existing maps. In other words, we need to reference these images. In this paper, we propose new methods for automatic referencing of satellite and radar images.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-19, June 2001
2-D Analogues of Allen Interval Algebra for Image Analysis: Towards Justification
Scott A. Starks, Dima Iourinski, and Vladik Kreinovich

Published in Proceedings of the 2001 IEEE Systems, Man, and Cybernetics Conference, Tucson, Arizona, October 7-10, 2001, pp. 2182-2186.

In reasoning about time and duration, researchers often use Allen's Interval Algebra. This algebra describes possible relations between 1-D intervals: an interval can precede another one, follow another one, start another one, etc. It thus describes the relationships between different intervals in terms of words from natural language. To give a natural-language description of 2-D images, it is desirable to develop a similar approach for describing the relationship between 2-D objects in a picture. In their recent papers, Jim Keller and his collaborators proposed a new approach based on a simulation of a "force" between these objects. In this paper, we show that their force formula is theoretically optimal.
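
For reference, a sketch of Allen's relations for 1-D intervals (the paper's 2-D force-based extension is not reproduced here); seven of the thirteen relations are listed, the remaining six being their inverses.

    # Classify the Allen relation of interval a = (a1, a2) to b = (b1, b2).
    def allen_relation(a, b):
        (a1, a2), (b1, b2) = a, b
        if a2 < b1:               return "precedes"
        if a2 == b1:              return "meets"
        if a1 == b1 and a2 == b2: return "equals"
        if a1 == b1 and a2 < b2:  return "starts"
        if a2 == b2 and a1 > b1:  return "finishes"
        if a1 > b1 and a2 < b2:   return "during"
        if a1 < b1 < a2 < b2:     return "overlaps"
        return "inverse of one of the above"

    print(allen_relation((1, 3), (3, 6)))   # meets
    print(allen_relation((2, 4), (1, 6)))   # during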

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-18a, October 2002
Towards More Realistic (e.g., Non-Associative) And- and Or-Operations in Fuzzy Logic
Vladik Kreinovich

Published in Soft Computing, 2004, Vol. 8, No. 4, pp. 274-280.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-18, June 2001
Towards More Realistic (e.g., Non-Associative) And- and Or-Operations in Fuzzy Logic
Jesus Martinez, Leopoldo Macias, Ammar Esper, Jesus Chaparro, Vick Alvarado, Scott A. Starks, and Vladik Kreinovich

Published in Proceedings of the 2001 IEEE Systems, Man, and Cybernetics Conference, Tucson, Arizona, October 7-10, 2001, pp. 2187-2192.

How is fuzzy logic usually formalized? There are many seemingly reasonable requirements that a logic should satisfy: e.g., since A&B and B&A mean the same, the corresponding and-operation should be commutative. Similarly, since A&A means the same as A, we should expect the and-operation to satisfy this property as well, etc. It turns out to be impossible to satisfy all these seemingly natural requirements, so usually, some requirements are picked as absolutely true (like commutativity or associativity), and others are ignored if they contradict the picked ones.

This idea leads to a neat mathematical theory, but the analysis of real-life expert reasoning shows that all the requirements are only approximately satisfied; instead, we should require all of these requirements to be satisfied to some extent. In this paper, we show the preliminary results of analyzing such operations. In particular, we show that non-associative operations explain the empirical 7±2 law in psychology, according to which a person can normally distinguish between no more than 7 plus or minus 2 classes.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-17a, June 2001
Towards Automatic Detection of Erroneous Measurement Results in a Gravity Database
Qian Wen, Ann Q. Gates, Jan Beck, Vladik Kreinovich, and G. Randy Keller

Published in Proceedings of the 2001 IEEE Systems, Man, and Cybernetics Conference, Tucson, Arizona, October 7-10, 2001, pp. 2170-2175.

Geospatial databases often contain erroneous measurements. For some of these databases, such as gravity databases, the known methods of detecting erroneous measurements -- based on regression analysis -- do not work well. As a result, to clean such databases, experts use manual methods, which are very time-consuming. In this paper, we propose a (natural) "localized" version of regression analysis as a technique for automatic cleaning. We illustrate the efficiency of this technique on the example of the gravity database.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-17, June 2001
Localized Regression Analysis as a Method for Detecting Erroneous Measurements in Geospatial Databases, with Application to Gravity Databases
Qian Wen, Nigel Hicks, G. Randy Keller, Ann Q. Gates, and Vladik Kreinovich

Geospatial databases generally consist of measurements related to points (or pixels in the case of raster data), lines, and polygons. In recent years, the size and complexity of these databases have increased significantly, and they often contain erroneous measurements or noise. In this paper, we address the problem of detecting erroneous and suspicious values in a database consisting of point measurements. As a test case, we use a database of measurements of anomalies in the Earth's gravity field that we have compiled, and we found that the standard methods of detecting erroneous measurements - based on regression analysis - do not work well. As a result, experts use manual methods, which are very time-consuming, to clean such databases. We propose a (natural) "localized" version of regression analysis as a technique for automatic cleaning of the database and illustrate its efficiency in the case of this gravity database. We believe that this approach will prove to be useful when dealing with many other types of point data.
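
A minimal sketch of the localized-regression idea (the window radius, the threshold, and the use of a linear local trend are our illustration choices, not the paper's parameters): for each point, fit a trend to its spatial neighbors and flag the point if its residual exceeds a few standard deviations of the local scatter.

    import numpy as np

    def flag_suspicious(xy, values, radius=10.0, k=3.0):
        """xy: (N, 2) coordinates; values: (N,) measurements."""
        flags = []
        for i, p in enumerate(xy):
            near = np.linalg.norm(xy - p, axis=1) < radius
            near[i] = False                 # exclude the point itself
            if near.sum() < 4:
                flags.append(False)         # too few neighbors to judge
                continue
            # Local linear trend: value ~ c0 + c1*x + c2*y on neighbors.
            A = np.c_[np.ones(near.sum()), xy[near]]
            coef, *_ = np.linalg.lstsq(A, values[near], rcond=None)
            resid = values[near] - A @ coef
            predicted = np.r_[1.0, p] @ coef
            flags.append(abs(values[i] - predicted) > k * resid.std())
        return np.array(flags)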

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-16, May 2001
Can Computers Do the Job of Nobelist Physicists? Planck Formula Revisited
Richard Alo, Raul Trejo, and Vladik Kreinovich

Published in Proc. of the 2001 International Conference on Information Technology for the New Millennium IConIT'2001, Bangkok, Thailand, May 28-30, 2001, pp. 284-295.

There exist several computer programs which successfully model the discovery process in science. There are successful expert systems in medicine and other areas. But one area is a real challenge for such systems: theoretical physics. The most advanced knowledge discovery programs (like BACON, written under the supervision of the Nobelist Herbert A. Simon) successfully reproduce only 17th-, 18th-, and 19th-century physics, but stop short of explaining the very first formula of 20th-century physics: Planck's law of black body radiation. This law, discovered through a leap of insight, led to modern quantum physics. The programs stop short not because the computers are not fast enough: as Simon emphasized, we need new ideas -- not only new computers.

In the present paper, we describe the natural symmetry ideas which lead directly to Planck's formula. Possible other applications of these ideas are also discussed.

Compressed PostScript file, pdf file


Technical Report UTEP-CS-01-15, May 2001
Assessing the Predictive Accuracy of Complex Simulation Models
Timothy Ross, Vladik Kreinovich, and Cliff Joslyn

Published in the Proceedings of the Joint 9th World Congress of the International Fuzzy Systems Association and 20th International Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS 2001, Vancouver, Canada, July 25-28, 2001, pp. 2008-2012.

pdf file


Technical Report UTEP-CS-01-14, April 2001
Updated version UTEP-CS-01-14a, September 2001
A New Derivation of Centroid Defuzzification
Mourad Oussalah, Hung T. Nguyen, and Vladik Kreinovich

Published in Proceedings of the 10th IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2001, Melbourne, Australia, December 2-5, 2001, Vol. 2, pp. 884-887.

We describe a new symmetry-based derivation of centroid defuzzification.
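
For reference, the centroid defuzzification formula itself (the paper contributes a new derivation of it, not a new formula): the defuzzified value is x* = (integral of x*mu(x) dx) / (integral of mu(x) dx). A discretized sketch with an invented triangular membership function:

    import numpy as np

    x = np.linspace(0, 10, 1001)
    mu = np.clip(1 - np.abs(x - 6) / 2, 0, None)   # "about 6", support [4, 8]
    centroid = (x * mu).sum() / mu.sum()           # discretized centroid
    print(centroid)   # ~6.0: the centroid of a symmetric membership function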

Original version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-13, April 2001
Updated version UTEP-CS-01-13a, September 2001
Hyperbolic Approach to Fuzzy Control is Optimal
Hung T. Nguyen, Vladik Kreinovich, Michael Margaliot, and Gideon Langholz

Published in Proceedings of the 10th IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2001, Melbourne, Australia, December 2-5, 2001, Vol. 2, pp. 888-891.

In a series of papers and a book, M. Margaliot and G. Langholz proposed a hyperbolic approach to fuzzy control, in which they apply a certain hyperbolic non-linear transformation to the original variables. In this paper, we consider all possible non-linear transformations of this type and show that this hyperbolic transformation is indeed optimal.

Original version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-12, April 2001
Updated version UTEP-CS-01-12a, September 2001
Discrete (Set) Derivatives and "Algebraic" Fuzzy Logic Operations
Bernadette Bouchon-Meunier, Hung T. Nguyen, and Vladik Kreinovich

Published in Proceedings of the 10th IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2001, Melbourne, Australia, December 2-5, 2001, Vol. 1, pp. 420-423.

We propose a new way to generalize logical operations from the discrete classical logic to a continuous fuzzy logic: namely, we propose to define derivatives for the discrete case, and then to use these derivatives to derive the continuous operations. We show that this natural approach leads to the "algebraic" fuzzy operations a*b and a+b-a*b.
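
A quick sanity check (ours, not the paper's) that these operations indeed extend the classical discrete ones: on the truth values {0, 1}, a*b and a+b-a*b reduce to the usual Boolean "and" and "or".

    def and_alg(a, b): return a * b            # algebraic product
    def or_alg(a, b):  return a + b - a * b    # algebraic (probabilistic) sum

    for a in (0, 1):
        for b in (0, 1):
            assert and_alg(a, b) == (a and b)
            assert or_alg(a, b) == (a or b)
    print(and_alg(0.7, 0.8), or_alg(0.7, 0.8))   # 0.56, 0.94 (up to rounding)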

Original version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-11, April 2001
Updated version UTEP-CS-01-11a, September 2001
Logic-Motivated Choice of Fuzzy Logic Operators
Pratit Santiprabhob, Hung T. Nguyen, Witold Pedrycz, and Vladik Kreinovich

Published in Proceedings of the 10th IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2001, Melbourne, Australia, December 2-5, 2001, Vol. 2, pp. 646-649.

Many different "and"- and "or"-operations have been proposed for use in fuzzy logic; it is therefore important to select, for each particular application, the operations which are the best for that application. Several papers discuss the optimal choice of "and"- and "or"-operations for fuzzy control, when the main criterion is to get the most stable control (or the smoothest, the most robust, or the fastest-to-compute). In reasoning applications, however, it is more appropriate to select operations which are the best in reflecting human reasoning, i.e., operations which are "the most logical". In this paper, we explain how we can use logical motivations to select fuzzy logic operations, and show the consequences of this choice. As one of the unexpected consequences, we get a surprising relation with the entropy techniques well known in the probabilistic approach to uncertainty.

Original version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-10, April 2001
Updated version UTEP-CS-01-10a, April 2001
Which Truth Values in Fuzzy Logics are Definable?
Hung T. Nguyen, Vladik Kreinovich, and Antonio Di Nola

Short version published in: Proceedings of the NEW New Paradigms for the New Millennium Conference, Caserta, Italy, September 13-15, 2001; full paper published in International Journal of Intelligent Systems, 2003, Vol. 18, No. 10, pp. 1057-1064.

In fuzzy logic, every word or phrase describing uncertainty is represented by a real number from the interval [0,1]. There are only denumerably many words and phrases, but continuum-many real numbers; thus, not every real number corresponds to some commonsense degree of uncertainty. In this paper, for several fuzzy logics, we describe which numbers describe such degrees, i.e., in mathematical terms, which real numbers are definable in the corresponding fuzzy logic.

Original version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-09, March 2001
Revised version UTEP-CS-01-09b, November 2005
Optimal Choice of Granularity in Commonsense Estimation: Why Half-Orders of Magnitude
Jerry R. Hobbs and Vladik Kreinovich

Short version published in the Proceedings of the Joint 9th World Congress of the International Fuzzy Systems Association and 20th International Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS 2001, Vancouver, Canada, July 25-28, 2001, pp. 1343-1348; full paper published in International Journal on Intelligent Systems, 2006, Vol. 21, No. 8, pp. 843-855.

It has been observed that when people make crude estimates, they feel comfortable choosing between alternatives which differ by a half-order of magnitude (e.g., were there 100, 300, or 1,000 people in the crowd), and less comfortable making a choice on a more detailed scale, with finer granules, or on a coarser scale (like 100 or 1,000). In this paper, we describe two models of choosing granularity in commonsense estimates, and we show that for both models, in the optimal granularity, the next estimate is 3-4 times larger than the previous one. Thus, these two optimization results explain the commonsense granularity.
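
The arithmetic behind this claim: a half-order of magnitude is a factor of sqrt(10), approximately 3.16, which matches the observed "3-4 times larger" step.

    import math

    factor = math.sqrt(10)                              # one half-order
    print([round(100 * factor ** k) for k in range(4)])
    # [100, 316, 1000, 3162] -- roughly the scale 100, 300, 1000, 3000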

Original file in Compressed PostScript and pdf;
Revised file in Compressed PostScript and pdf


Technical Report UTEP-CS-01-08, March 2001
How to Make Sure that "~100" + 1 is ~100 in Fuzzy Arithmetic: Solution and its (Inevitable) Drawbacks
Vladik Kreinovich, Hung T. Nguyen, and Witold Pedrycz

Published in the Proceedings of the Joint 9th World Congress of the International Fuzzy Systems Association and 20th International Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS 2001, Vancouver, Canada, July 25-28, 2001, pp. 1653-1658.

From the commonsense viewpoint, if a person who weighs around 100 kilograms gains one more kilogram, his weight is still around 100 kilograms. Alas, this is not so in traditional fuzzy arithmetic. In this paper, we propose a modification of fuzzy arithmetic which does have this property. We gain the desired property, but there is no free lunch: we have to lose two important properties of the traditional fuzzy arithmetic. First, addition is no longer always associative; second, addition is no longer always easily computable.
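
To see why traditional fuzzy arithmetic lacks this property: by the extension principle, adding the crisp number 1 merely shifts the membership function, so the membership function of ~100 + 1 is mu(x - 1), which differs from mu(x). A sketch with an invented triangular membership function:

    def mu_about_100(x):
        """Triangular membership function for 'around 100 kilograms'."""
        return max(0.0, 1.0 - abs(x - 100) / 10)

    def mu_after_gaining_1kg(x):
        # Extension principle with a crisp summand: shift by 1.
        return mu_about_100(x - 1)

    print(mu_about_100(105), mu_after_gaining_1kg(105))   # 0.5 vs 0.6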

Compressed PostScript file, pdf file.


Technical Report UTEP-CS-01-07, March 2001
Why Unary and Binary Operations in Logic: General Result Motivated by Interval-Valued Logics
Hung T. Nguyen, Vladik Kreinovich, and I. R. Goodman

Published in the Proceedings of the Joint 9th World Congress of the International Fuzzy Systems Association and 20th International Conference of the North American Fuzzy Information Processing Society IFSA/NAFIPS 2001, Vancouver, Canada, July 25-28, 2001, pp. 1991-1996.

Traditionally, in logic, only unary and binary operations are used as basic ones - e.g., "not", "and", "or" - while ternary (and higher-order) operations are built as combinations of unary and binary ones. For the classical logic, with the binary set of truth values {0,1}, the possibility to express an arbitrary operation in terms of unary and binary ones is well known: it follows, e.g., from the well-known possibility to express an arbitrary operation in DNF. A similar representation result for [0,1]-based logic was proven in our previous paper. In this paper, we extend this result to finite logics (more general than classical logic) and to multi-D analogues of fuzzy logic - both motivated by interval-valued fuzzy logics.
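
A sketch of the classical fact cited above: any n-ary Boolean operation can be expressed through unary "not" and binary "and"/"or" by reading its DNF off the truth table (the ternary majority example is ours).

    from itertools import product

    def to_dnf(op, n):
        """Return a DNF expression (as a string) for an n-ary Boolean op."""
        terms = []
        for bits in product((0, 1), repeat=n):
            if op(*bits):
                lits = [f"x{i+1}" if b else f"not x{i+1}"
                        for i, b in enumerate(bits)]
                terms.append("(" + " and ".join(lits) + ")")
        return " or ".join(terms) if terms else "False"

    majority = lambda a, b, c: int(a + b + c >= 2)
    print(to_dnf(majority, 3))   # four conjunctions joined by "or"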

Compressed PostScript file, pdf file.


Technical Report UTEP-CS-01-06, February 2001
On Fusion of Soft and Hard Computing: Traditional ("Hard Computing") Optimal Rescaling Techniques Simplify Fuzzy Control
Hugh F. VanLandingham and Vladik Kreinovich

One of the main objectives of fuzzy control is to translate expert rules - formulated in imprecise ("fuzzy") words from natural language - into a precise control strategy. This translation is usually done in two steps. First, we apply a fuzzy control methodology to get a rough approximation to the expert's control strategy, and then we tune the resulting fuzzy control system. The first step (getting a rough approximation) is well-analyzed, and the fact that we have the expert's intuitive understanding enables us to use soft computing techniques to perform this step. The second (tuning) step is much more difficult: we no longer have any expert understanding of which tuning is better, and therefore, soft computing techniques are not that helpful. In this paper, we show that we can formulate an important particular case of the tuning problem as a traditional optimization problem and solve it by using traditional ("hard computing") techniques. We show, on a practical industrial control example, that the resulting fusion of soft computing (for a rough approximation) and hard computing (for tuning) leads to high-quality control.

Compressed PostScript file, pdf file.


Technical Report UTEP-CS-01-05, February 2001
What is the Best Way to Draw a Cube? a Hypercube?
Brian d'Auriol, Vladik Kreinovich, Bindu George, Florence Muganda, and Pramod Kumar Chikkapaiah

Published in Geombinatorics, 2002, Vol. 11, No. 4, pp. 105-110.

One of the possible interconnection schemes for processors is the hypercube. The simplest case of a hypercube - a 4-vertex square - can be naturally represented on a 2-D page. To represent a 3-dimensional (or higher-dimensional) hypercube, we must project the additional dimensions onto a 2-D page. In general, when we project a multi-D space onto a 2-D plane, different points may project onto the same point. To get the best visualization, we must select a projection in such a way that the projections of different points are as distant from each other as possible. In this paper, we formalize and solve the corresponding optimization problem. Thus, we show the best way of drawing a cube.
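
A sketch of the resulting optimization problem (random search here stands in for the paper's analytical solution): choose a 2 x n projection matrix that maximizes the minimum distance between the projected vertices of the n-dimensional hypercube.

    import itertools
    import numpy as np

    def min_pairwise_distance(points):
        return min(np.linalg.norm(p - q)
                   for p, q in itertools.combinations(points, 2))

    n = 3                                        # an ordinary cube
    vertices = np.array(list(itertools.product((0, 1), repeat=n)), float)
    rng = np.random.default_rng(1)
    best_score = -1.0
    for _ in range(2000):
        P = rng.normal(size=(2, n))
        P /= np.linalg.norm(P)                   # fix the overall scale
        best_score = max(best_score,
                         min_pairwise_distance(vertices @ P.T))
    print(best_score)   # quality of the best 2-D drawing found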

Compressed PostScript file, pdf file.


Technical Report UTEP-CS-01-04, January 2001
Updated version UTEP-CS-01-04a, May 2001
Final version UTEP-CS-01-04c, August 2002
From [0,1]-Based Logic to Interval Logic (From Known Description of All Possible [0,1]-Based Logical Operations to a Description of All Possible Interval-Based Logical Operations)
Hung T. Nguyen and Vladik Kreinovich

Published in Notes on Intuitionistic Fuzzy Sets (NIFS), 2002, Vol. 8, No. 3, pp. 75-94.

Since the early 1960s, we have had a complete description of all possible [0,1]-based logical operations, namely of "and"-operations (t-norms) and "or"-operations (t-conorms). In some real-life situations, intervals provide a more adequate way of describing uncertainty, so we need to describe interval-based logical operations. Usually, researchers followed a pragmatic path and simply derived these operations from the [0,1]-based ones. From the foundational viewpoint, it is desirable not to a priori restrict ourselves to such derivative operations but, instead, to get a description of all interval-based operations which satisfy reasonable properties.

Such a description is presented in this paper. It turns out that all such operations can be described as the result of applying interval computations to the corresponding [0,1]-based ones.
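
In the simplest case, this conclusion looks as follows (a sketch; since every t-norm is monotone, componentwise application is exactly "interval computations" here): applying a [0,1]-based "and"-operation t to interval-valued degrees gives [a-, a+] & [b-, b+] = [t(a-, b-), t(a+, b+)].

    # Interval extension of a (monotone) "and"-operation t.
    def interval_and(a, b, t=min):
        (a_lo, a_hi), (b_lo, b_hi) = a, b
        return (t(a_lo, b_lo), t(a_hi, b_hi))

    print(interval_and((0.5, 0.8), (0.6, 0.9)))          # (0.5, 0.8), t = min
    print(interval_and((0.5, 0.8), (0.6, 0.9),
                       t=lambda x, y: x * y))            # ~(0.3, 0.72), product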

Original version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf;
Final version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-03, January 2001
Updated version UTEP-CS-01-03a, August 2001
Theoretical Justification of a Heuristic Subbox Selection Criterion
Vladik Kreinovich and Tibor Csendes

Published in Central European Journal of Operations Research CEJOR, 2001, Vol. 9, No. 3, pp. 255-265.

The most widely used guaranteed methods for global optimization are probably the interval-based branch-and-bound techniques. In these techniques, we start with a single box - the entire function domain - as a possible location of the global minimum, and then, at each step, subdivide some of the boxes, use interval computations to compute the enclosure [F-(X), F+(X)] of the range f(X) of the objective function f(x) on each new sub-box X, and, based on these computations, eliminate the boxes which cannot contain the global minimum. The computational efficiency of these methods strongly depends on which boxes we select for subdivision. Traditionally, algorithms selected for subdivision a box with the smallest value of F-(X). Recently, it was shown that the algorithm converges much faster if we select, instead, a box with the largest possible value of the ratio (f - F-(X))/(F+(X) - F-(X)), where f is a current upper bound on the actual global minimum. In this paper, we give a theoretical justification for this empirical criterion.
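
A minimal 1-D sketch of such a branch-and-bound loop with the ratio-based selection criterion (the objective function f(x) = x^2 - 2x and its hand-coded range enclosure are our illustration; a real solver would use general interval arithmetic):

    # Enclosure [F-(X), F+(X)] of f(x) = x**2 - 2*x on the box [lo, hi].
    def enclosure(lo, hi):
        ends = [lo**2 - 2*lo, hi**2 - 2*hi]
        f_lo = -1.0 if lo <= 1 <= hi else min(ends)   # interior min at x = 1
        return f_lo, max(ends)

    boxes = [(-3.0, 4.0)]
    f_best = float("inf")            # current upper bound on the minimum
    for _ in range(30):
        # Discard boxes that cannot contain the global minimum.
        boxes = [b for b in boxes if enclosure(*b)[0] <= f_best]
        # Select the box maximizing (f_best - F-) / (F+ - F-).
        def ratio(b):
            f_minus, f_plus = enclosure(*b)
            return (f_best - f_minus) / (f_plus - f_minus + 1e-12)
        lo, hi = max(boxes, key=ratio)
        boxes.remove((lo, hi))
        mid = (lo + hi) / 2
        boxes += [(lo, mid), (mid, hi)]
        f_best = min(f_best, mid**2 - 2*mid)   # sample point updates the bound
    print(f_best)   # approaches the true minimum f(1) = -1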

Original version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-02, January 2001
Updated version UTEP-CS-01-02a, June 2001
Updated version UTEP-CS-01-02d, July 2001
Computational Complexity of Optimization and Crude Range Testing: A New Approach Motivated by Fuzzy Optimization
G. William Walster and Vladik Kreinovich

Published in Fuzzy Sets and Systems, 2003, Vol. 135, No. 1, pp. 179-208; correction 2004, Vol. 141, p. 163.

It is often important to check whether the maximum max f of a given function f on a given set B is smaller than a given number C. This "crude range testing" (CRT) problem is one of the most important problems in the practical application of interval analysis. Empirical evidence shows that the larger the difference C - max f, the easier the test. In general, the fewer global maxima, the easier the test; and finally, the further away global maxima are from each other, the easier the test. Using standard complexity theory to explain these empirical observations fails because the compared CRT problems are all NP-hard. In this paper, the analysis of fuzzy optimization is used to formalize the relative complexity of different CRT problems. This new CRT-specific relative complexity provides a new and "robust" theoretical explanation for the above empirical observations. The explanation is robust because CRT relative complexity takes numerical inaccuracy into consideration. The new explanation is important because it is a more reliable guide than empirical observations to developers of new solutions to the CRT problem.

Original version in Compressed PostScript and pdf;
First updated version in Compressed PostScript and pdf;
Second updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-01e, April 2002
Updated version UTEP-CS-01-01f, May 2003
Complexity and Approximation Studies of Finding Polynomially Bounded Length Plans for Temporal Goals
Chitta Baral, Vladik Kreinovich, Sudeshna Sarkar, Nam Tran, Raul Trejo, and Xin Zhang

In this paper, we consider the problem of planning with temporal goals, focusing on plans of polynomially bounded length. Past results on the complexity of planning are mostly about finding plans that take the world to one of several desired states, often described using a goal formula. We first consider goals expressed using linear temporal logic and analyze the complexity of planning with respect to such goals, both when the states in the trajectory are complete and when they are incomplete. For the latter case, we also develop a notion of approximate planning and show its complexity to be lower. We also show that this notion of approximate planning is sound. We then consider goals that also have a knowledge component, and refer to such goals as knowledge temporal goals. We analyze the complexity of planning with respect to such goals, propose a notion of approximate planning which is sound, and analyze the complexity of such planning. Finally, we present several goals that cannot be adequately expressed using linear temporal logic. To specify these goals, we propose the use of branching-time temporal logics such as CTL and CTL*, and define what it means for a plan to satisfy such a goal. We then analyze the complexity of planning with such goals and identify a variant of such goals which leads to a lower complexity of planning.

Original version in Compressed PostScript and pdf;
updated version in Compressed PostScript and pdf.


Technical Report UTEP-CS-01-01, January 2001
Updated version UTEP-CS-01-01a, April 2001
Computational Complexity of Planning with Temporal Goals
Chitta Baral, Vladik Kreinovich, and Raul A. Trejo

Published in the Proceedings of the International Joint Conferences on Artificial Intelligence IJCAI'01, Seattle, Washington, August 4-10, 2001, pp. 509-514.

In the last decade, there have been several studies of the computational complexity of planning. These studies normally assume that the goal of planning is to make a certain fluent true after the sequence of actions. In many real-life planning problems, the goal is represented in a much more complicated temporal form: e.g., in addition to having a desired fluent true at the end, we may want to keep certain fluents true at all times. In this paper, we study the complexity of planning for such temporal goals. We show that for goals expressible in Linear Temporal Logic, planning has the same complexity as for non-temporal goals: it is NP-complete; and for goals expressible in a more general Branching Temporal Logic, planning is PSPACE-complete.

Original version in Compressed PostScript and pdf;
Updated version in Compressed PostScript and pdf.