University of Texas at El Paso
Computer Science Department
Abstracts of 1996 Reports


Technical Report UTEP-CS-96-49, December 1996
Updated version UTEP-CS-96-49a, December 1996
Fuzzy Rule Based Modeling as a Universal Approximation Tool
Vladik Kreinovich, George C. Mouzouris, and Hung T. Nguyen

Published in: H. T. Nguyen and M. Sugeno (eds.), Fuzzy Systems: Modeling and Control, Kluwer, Boston, MA, 1998, pp. 135-195.

Fuzzy control methodology translates fuzzy control rules, i.e., rules formulated by experts in (fuzzy) terms of natural language, into a precise control strategy. In many real-life situations, fuzzy control methodology leads to a high-quality control strategy. These successes make fuzzy control a very promising control technique. In some applications, however, we want not only a promising technique, i.e., a technique that will most probably work; we want a technique that is guaranteed to work. The question therefore arises: can we guarantee that fuzzy control will always work? In more precise terms, can we always guarantee that an arbitrary control strategy can be obtained, within an arbitrary accuracy, by applying a fuzzy control technique to appropriate expert rules? In other words, is fuzzy control a universal control tool? It turns out that in many applications, fuzzy control is indeed universal. This paper presents an overview of the corresponding results and open problems.

Original file and updated version are in LaTeX, use crckapb.sty.
Latest version in compressed PostScript and in pdf.


Technical Report UTEP-CS-96-48b, December 1996
Interval-Valued Fuzzy Control in Space Exploration
Hung T. Nguyen and Vladik Kreinovich

Published in BUlletin for Studies and Exchanges on Fuzziness and its AppLications (BUSEFAL), No. 71, July 1997, pp. 55-64.

This paper is a short overview of our NASA-supported research into the possibility of using interval-based intelligent control techniques for space exploration.

File in LaTeX, uses IEEEtran.sty, and in pdf.


Technical Report UTEP-CS-96-48a, December 1996
Intelligent Control in Space Exploration: What Non-Linearity to Choose?
Hung T. Nguyen and Vladik Kreinovich

Published in BUlletin for Studies and Exchanges on Fuzziness and its AppLications (BUSEFAL), No. 70, April 1997, pp. 61-69.

This paper is a short overview of our NASA-supported research into the optimal choice of fuzzy control techniques for space exploration problems.

File in LaTeX, uses IEEEtran.sty, and in pdf.


Technical Report UTEP-CS-96-48, December 1996
Intelligent Control in Space Exploration: Interval Computations are Needed
Hung T. Nguyen and Vladik Kreinovich

Published in ACM SIGNUM Newsletter, 1997, Vol. 32, No. 1/2, pp. 17-36.

This paper is a short overview of our NASA-supported research into the necessity of using interval-based intelligent control techniques for space exploration. This is a reasonably new application area for interval computations, and we hope that the overviewed results and problems will be of some interest to the numerical computations community.

This paper is a short version of our detailed report presented to NASA.

File in LaTeX, uses IEEEtran.sty, and in pdf.


Technical Report UTEP-CS-96-47, November 1996
From Interval Computations to Modal Mathematics: Applications and Computational Complexity
Bernadette Bouchon-Meunier and Vladik Kreinovich

Published in: ACM SIGSAM Bulletin, 1998, Vol. 32, No. 2, pp. 7-11.

File in LaTeX.


Technical Report UTEP-CS-96-46, November 1996
Natural Requirements for Interval Roundings Lead to a Hardware-Independent Characterization of Standard Rounding Procedures
T. E. Kaminsky and V. Kreinovich

Published in Notes on Intuitionistic Fuzzy Sets (NIFS), 1998, Vol. 4, No. 2, pp. 57-64.

Roundings are usually described in an ad hoc manner, by simply describing the way rounding off is done in the actual computer. It is desirable to have a more abstract, hardware-independent description of possible roundings. In this paper, we propose an axiomatic description of rounding procedures. It turns out that reasonable axioms uniquely determine the roundings that are actually used in the existing computers.

File in LaTeX.


Technical Report UTEP-CS-96-45, November 1996
Updated version UTEP-CS-96-45a, January 1997
Updated version UTEP-CS-96-45c, February 2017
QFT + NP = P
Quantum Field Theory (QFT): A Possible Way of Solving
NP-Complete Problems in Polynomial Time
Adriana Beltran, Vladik Kreinovich, and Luc Longpre

To appear in: Aboul Ella Hassanien, Mohamed Elhoseny, Ahmed Farouk, and Janusz Kacprzyk (eds.), Quantum Computing: an Environment for Intelligent Large Scale Real Application, Springer Verlag

It has recently been theoretically shown that the dependency of some (potentially observable) quantities in quantum field theory (QFT) on the parameters of this theory is discontinuous. This discovery leads to the theoretical possibility of checking whether the value of a given physical quantity is equal to 0 or different from 0 (here, "theoretical" means that this checking requires very precise measurements and, because of that, this conclusion has not yet been verified by a direct experiment).

This result from QFT enables us to do what we previously could not: check whether two computable real numbers are equal or not. In this paper, we show that we can use this ability to solve NP-complete problems in polynomial time.

Specifically, we will introduce a new model of computation. This new model is based on solid mainstream physics (namely, on quantum field theory). It is capable of solving NP-complete problems in polynomial time.

Original file in LaTeX, uses IEEEtran.sty;
updated version in LaTeX.
Updated version UTEP-CS-96-45c in pdf.


Technical Report UTEP-CS-96-44, November 1996
Nonstandard (Non-Sigma-Additive) Probabilities
in Algebraic Quantum Field Theory
Vladik Kreinovich and Luc Longpre

Published in International Journal of Theoretical Physics, 1997, Vol. 36, No. 7, pp. 1601-1615.

Traditionally, physicists deduce the observational (physical) meaning of probabilistic predictions from the implicit assumption that well-defined events whose probability is 0 never occur. For example, the conclusion that in a potentially infinite sequence of identical experiments in which the probability of Heads is 0.5 (like coin tossing), the frequency of Heads tends to 0.5, follows from the theorem that sequences for which the frequencies do not tend to 0.5 occur with probability 0. Similarly, the conclusion that in quantum mechanics, measuring a quantity always results in a number from its spectrum is justified by the fact that the probability of getting a number outside the spectrum is 0.

In the mid-1960s, a consistent formalization of this assumption was proposed by Kolmogorov and Martin-Lof, who defined a random element of a probability space as an element that does not belong to any definable set of probability 0 (definable in some reasonable sense). This formalization relies on the fact that traditional probability measures are sigma-additive, so that the union of countably many sets of probability 0 also has probability 0.

In quantum mechanics with infinitely many degrees of freedom (e.g., in quantum field theory), and in statistical physics, one must often consider non-sigma-additive measures, for which Martin-Lof's definition does not apply. Many such measures can be defined as "limits" of standard probability distributions. In this paper, we formalize the notion of a random element for such finitely-additive probability measures and thus explain the observational (physical) meaning of such probabilities.

File in LaTeX, in Compressed PostScript, and in pdf.


Technical Report UTEP-CS-96-43, October 1996
From Numerical Intervals to Set Intervals
(Interval-related results presented at
the First International Workshop
on Applications and Theory of Random Sets)
Hung T. Nguyen and Vladik Kreinovich

Published in Reliable Computing, 1997, Vol. 3, No. 1, pp. 95-102.

File in LaTeX.


Technical Report UTEP-CS-96-42, October 1996
Updated version UTEP-CS-96-42a, March 1997
Second updated version UTEP-CS-96-42b, April 1999
Third updated version UTEP-CS-96-42c, January 2002
On the Solution Set of Particular Classes of Linear Interval Systems
Goetz Alefeld, Vladik Kreinovich, and Guenter Mayer

Published in Journal of Computational and Applied Mathematics, 2003, Vol. 152, pp. 1-15.

We characterize the solution set S of real linear systems Ax=b by a set of inequalities if b lies between some given bounds b-, b+ and if the n x n coefficient matrix A varies similarly between two bounds A- and A+. In addition, we restrict A to a particular class of matrices, for instance the class of symmetric, skew-symmetric, persymmetric, Toeplitz, or Hankel matrices. In this way we generalize the famous Oettli-Prager criterion as well as the results from our previous papers.
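For the unrestricted case, the Oettli-Prager criterion mentioned above has a simple computational form: a vector x belongs to the solution set iff |Ac x - bc| <= DA |x| + Db holds componentwise, where Ac = (A- + A+)/2 and bc = (b- + b+)/2 are the midpoints and DA = (A+ - A-)/2, Db = (b+ - b-)/2 the radii. Below is a minimal Python sketch of this membership test; the example data are made up for illustration, and the restricted matrix classes studied in the paper require a more refined characterization.

import numpy as np

def in_solution_set(x, A_lo, A_hi, b_lo, b_hi):
    # Oettli-Prager test (unrestricted case): does there exist A in [A_lo, A_hi]
    # and b in [b_lo, b_hi] with A x = b?
    A_mid, A_rad = (A_hi + A_lo) / 2, (A_hi - A_lo) / 2
    b_mid, b_rad = (b_hi + b_lo) / 2, (b_hi - b_lo) / 2
    return bool(np.all(np.abs(A_mid @ x - b_mid) <= A_rad @ np.abs(x) + b_rad))

# Hypothetical 2 x 2 example
A_lo = np.array([[2.0, -1.0], [0.0, 2.0]]); A_hi = np.array([[3.0, 0.0], [1.0, 3.0]])
b_lo = np.array([0.0, 1.0]);                b_hi = np.array([1.0, 2.0])
print(in_solution_set(np.array([0.2, 0.5]), A_lo, A_hi, b_lo, b_hi))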

Original file in LaTeX;
First updated version in LaTeX, uses psfig.sty and sym_p_f.eps.
Second updated version in Compressed PostScript and in pdf.
Third updated version in Compressed PostScript and in pdf.


Technical Report UTEP-CS-96-41, October 1996
Fuzzy Implication Can Be Arbitrarily Complicated:
A Theorem
Francisco G. Fernandez and Vladik Kreinovich

Published in International Journal of Intelligent Systems, 1998, Vol. 13, No. 5, pp. 445-451.

In fuzzy logic, there are several methods of representing implication in terms of "and", "or", and "not"; in particular, explicit representations define the class of S implications, and implicit representations define the class of R implications. Some reasonable implication operations have been proposed, such as Yager's a^b, that are difficult to represent as S or R implications. For such operations, a new class of representations has recently been proposed, called A implications, for which the relationship between implications and the basic operations "and", "or", and "not" is even more complicated.

A natural question is: is this complexity really necessary? In other words, is it true that A operations cannot be described as S or R ones, or can they, and we simply have not found these representations?

In this paper, we show that yes, the complexity is necessary, because there are operations that cannot be represented in a simpler form.

File in LaTeX.


Technical Report UTEP-CS-96-40, September 1996
Updated version UTEP-CS-96-40a, April 1997
Geometric Approach to Quark Confinement
Vladik Kreinovich and Piotr Wojciechowski

Published in Geombinatorics, 1997, Vol. 7, No. 1, pp. 9-24.

Interactions that explain quark confinement correspond, in the first approximation, to interactions in 2-D space-time. In this paper, we show that this fact has a fundamental explanation: Namely,

Original file in LaTeX;
updated version in plain TeX, in Compressed PostScript and in pdf.


Updated version UTEP-CS-96-39b, October 1996
Astrogeometry: Towards Mathematical Foundations
Andrei Finkelstein, Olga Kosheleva, and Vladik Kreinovich

Published in International Journal of Theoretical Physics, 1997, Vol. 36, No. 4, pp. 1009-1020.

In image processing (e.g., in astronomy), the desired black-and-white image is, from the mathematical viewpoint, a set. So, we need to process sets. To define a generic set, we need infinitely many parameters; therefore, if we want to represent and process sets in the computer, we must restrict ourselves to finite-parametric families of sets that will be used to approximate the desired sets. The wrong choice of a family can lead to longer computations and worse approximation. Hence, it is desirable to find the family that is the best in some reasonable sense. In this paper, we show how the problems of choosing the optimal family of sets can be formalized and solved.

As a result of the described general methodology, for astronomical images, we get exactly the geometric shapes that have been empirically used by astronomers and astrophysicists (thus, we have a theoretical explanation for these shapes).

File in LaTeX, in Compressed PostScript and in pdf.


Technical Report UTEP-CS-96-39, September 1996
Updated version UTEP-CS-96-39a, October 1996
Astrogeometry, Error Estimation,
and Other Applications of Set-Valued Analysis
Andrei Finkelstein, Olga Kosheleva, and Vladik Kreinovich

Published in ACM SIGNUM Newsletter, 1996, Vol. 31, No. 4, pp. 3-25.

In many real-life application problems, we are interested in numbers, namely, in the numerical values of physical quantities. There are, however, at least two classes of problems in which we are actually interested in sets:

In both cases, we need to process sets. To define a generic set, we need infinitely many parameters; therefore, if we want to represent and process sets in the computer, we must restrict ourselves to finite-parametric families of sets that will be used to approximate the desired sets. The wrong choice of a family can lead to longer computations and worse approximation. Hence, it is desirable to find the family that is the best in some reasonable sense.

A similar problem occurs for random sets. To define a generic set, we need infinitely many parameters; as a result, traditional (finite-parametric) statistical methods are often not easily applicable to random sets. To avoid this difficulty, several researchers (including U. Grenander) have suggested approximating arbitrary sets by sets from a certain finite-parametric family. As soon as we fix this family, we can use methods of traditional statistics. Here, a similar problem appears: a wrong choice of an approximation family can lead to a bad approximation and/or long computations; so, which family should we choose?

In this paper, we show, on several application examples, how the problems of choosing the optimal family of sets can be formalized and solved. As a result of the described general methodology:

Original file in LaTeX, uses ima.sty, ima10.sty, epsf.sty, and amssym.def.
Updated version in LaTeX;
in Compressed PostScript and in pdf.


Technical Report UTEP-CS-96-38, August 1996
The Worse, the Better:
A Survey of Paradoxical Computational Complexity
of Interval Computations
Slava Nesterov and Vladik Kreinovich

Published in Maricia A. Campos (ed.),
Abstracts of the II Workshop on Computer Arithmetic,
Interval and Symbolic Computation (WAI'96),
Recife, Pernambuco, Brazil, August 7-8, 1996, pp. 61A-63A.

File in plain TeX.


Technical Report UTEP-CS-96-37, August 1996
Optimal Interval Computation Techniques:
Optimization of Numerical Methods
in Case of Uncertainty
Vladik Kreinovich and Raul Trejo

Published in Maricia A. Campos (ed.),
Abstracts of the II Workshop on Computer Arithmetic,
Interval and Symbolic Computation (WAI'96),
Recife, Pernambuco, Brazil, August 7-8, 1996, pp. 48-50.

File in plain TeX and in pdf.


Technical Report UTEP-CS-96-36, September 1996
How to Define an Average of Several Sets?
Vladik Kreinovich and Ilya Molchanov

Published in Geombinatorics, 1998, Part I, Vol. 7, No. 4, pp. 123-131; Part II, 1998, Vol. 8, No. 1, pp. 160-165.

When we have several images X1,...,Xn, it is desirable to define an "average", "typical" image X: e.g.,

There exist several methods of defining the "average" set. The simplest definitions are only applicable in certain cases, and the more complicated and more adequate ones face two problems:

In this paper, we show that these two problems are caused not by drawbacks of the particular definitions, but are inherent to set averaging.

These results still leave the open problem of defining "the" average set.

File in plain TeX.


Technical Report No. UTEP-CS-96-35, September 1996
Using Robust Optimization
to Play against an Imperfect Opponent
Ronald R. Yager and Vladik Kreinovich

Published in Soft Computing, 1997, Vol. 1, No. 2, pp. 69-80.

Traditional game theory is based on the assumption that the opponent is a perfect reasoner and that all payoff information is available. Based on this assumption, game theory recommends estimating the quality of each possible strategy by its worst possible consequences. In real life, opponents are often not perfect, and payoff information is often not exact. If the only disadvantage of some action is that an unusually clever opponent can find a complicated way to counter it, then this action may be a perfect recommendation for a play against a normal (not unusually clever) opponent.

In other words, to estimate the quality of each move, instead of a normal minimum of possible consequences, we must consider the robust minimum that takes into consideration the fact that some of the consequences will never occur to the normal opponent.

We show that in a reasonable statistical setting, this idea leads to the class of OWA operators.

It turns out that playing against an imperfect opponent is not only a more realistic strategy, it is also often a simpler one: e.g., for the simplest game for which playing against a perfect opponent is computationally intractable (NP-hard), playing against an imperfect opponent is computationally feasible.

File in LaTeX.


Technical Report No. UTEP-CS-96-34, September 1996
Signal Design for Radar Imaging in Radar Astronomy:
Genetic Optimization
Benjamin C. Flores, Vladik Kreinovich, and Roberto Vasquez

Published in: Dipankar Dasgupta and Zbigniew Michalewicz (eds.), Evolutionary Algorithms in Engineering Applications, Springer-Verlag, Berlin-Heidelberg, 1997, pp. 406-423.

Radar imaging is an advanced remote sensing technique that maps the reflectivity of distant objects by transmitting modulated signals at radio frequencies and processing the detected echoes. By proper waveform selection, it is currently possible to image the surface of planets or asteroids from Earth with a relatively high degree of resolution, despite the astronomical distances to these objects. Waveforms that are used for radar astronomy are characterized by a large spectral bandwidth and long time duration, which can be obtained by phase modulating a long pulse train. An example of phase modulation is binary phase coding in which each pulse of the train is randomly assigned a phase of 0 or pi. The corresponding echo pulses are correlated with the sequence of transmitted pulses to resolve the main features of an object. The correlation of long binary sequences can yield high resolution with significant clutter suppression. However, this process requires the selection of an optimum binary phase code among a significant number of possibilities. For a given code of length N, where N is the number of code elements, the population size is 2^N. Certain members of this population will exhibit good qualities for imaging, while others may behave poorly. In this chapter, we discuss the principles of radar imaging and the implementation of a genetic algorithm to find the "good" member codes from a population of codes of length N. This algorithm was successfully tested for binary phase codes of lengths N=13 and 24 pulses. For the short length N=13 codes, for which an optimum code exists and is already known, the rate of convergence to this optimum was poor relative to that of an exhaustive search. However, for the longer codes tested (N=24), the genetic algorithm converged much faster than an exhaustive search for the large solution space.
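The chapter's own implementation is not reproduced here; the following is only a generic Python sketch of the kind of genetic search described above, evolving +/-1 phase codes so as to minimize the peak sidelobe of the aperiodic autocorrelation. Population size, mutation rate, and the selection scheme are illustrative assumptions, not the settings used by the authors.

import random

def peak_sidelobe(code):
    # Peak off-center magnitude of the aperiodic autocorrelation of a +/-1 code.
    n = len(code)
    return max(abs(sum(code[i] * code[i + k] for i in range(n - k))) for k in range(1, n))

def random_code(n):
    return [random.choice([-1, 1]) for _ in range(n)]

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(code, rate=0.05):
    return [(-c if random.random() < rate else c) for c in code]

def genetic_search(n=13, pop_size=50, generations=200):
    population = [random_code(n) for _ in range(pop_size)]
    best = min(population, key=peak_sidelobe)
    for _ in range(generations):
        population.sort(key=peak_sidelobe)
        best = min(best, population[0], key=peak_sidelobe)
        parents = population[: pop_size // 2]              # truncation selection
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    return best, peak_sidelobe(best)

print(genetic_search(n=13))   # the known optimum for N=13 (a Barker code) has peak sidelobe 1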

File in LaTeX, uses epsf.tex, f1.eps, f2.eps, f3.eps, f4.eps, f5.eps, f6.eps.


Technical Report UTEP-CS-96-33, August 1996
Updated version UTEP-CS-96-33b, March 1997
Random Sets Unify, Explain, and Aid Known
Uncertainty Methods in Expert Systems
Vladik Kreinovich

Published in John Goutsias, Ronald P.S. Mahler, and Hung T. Nguyen (eds.), Random Sets: Theory and Applications, Springer-Verlag, 1997, pp. 321-345.

Numerous formalisms have been proposed for representing and processing uncertainty in expert systems. Several of these formalisms are somewhat ad hoc, in the sense that some of their formulas seem to have been chosen rather arbitrarily.

In this paper, we show that random sets provide a natural general framework for describing uncertainty, a framework in which many existing formalisms appear as particular cases. This interpretation of known formalisms (e.g., of fuzzy logic) in terms of random sets enables us to justify many "ad hoc" formulas. In some cases, when several alternative formulas have been proposed, random sets help to choose the best ones (in some reasonable sense).

One of the main objectives of expert systems is not only to describe the current state of the world, but also to provide us with reasonable actions. The simplest case is when we have the exact objective function. In this case, random sets can help in choosing the proper method of "fuzzy optimization".

As a test case, we describe the problem of choosing the best tests in technical diagnostics. For this problem, feasible algorithms are possible.

In many real-life situations, instead of an exact objective function, we have several participants with different objective functions, and we must somehow reconcile their (often conflicting) interests. Sometimes, the standard approaches of game theory do not work. We show that in such situations, random sets present a working alternative. This is one of the cases when particular cases of random sets (such as fuzzy sets) are not sufficient, and general random sets are needed.

Original file in LaTeX;
Updated version in LaTeX and in Compressed Postscript.
Both LaTeX files use ima.sty, ima10.sty, epsf.sty, and amssym.def.


Technical Report UTEP-CS-96-32, August 1996
Why is the Asymptotic Time Complexity
of Algorithms Often n^alpha or n^alpha*ln(n)?
Olga Kosheleva and Vladik Kreinovich

File in LaTeX.


Technical Report UTEP-CS-96-31, August 1996
Towards a More Realistic Definition of Feasibility
Douglas Schirmer and Vladik Kreinovich

Published in Bulletin of the European Association for Theoretical Computer Science (EATCS), 1996, Vol. 90, pp. 151-153.

File in LaTeX.


Technical Report UTEP-CS-96-30, August 1996
Geometry of Errors Revisited:
A Natural Way to
Vector-Valued Metric Spaces
Gerhard Heindl, Vladik Kreinovich, and Bernadette Bouchon-Meunier

Published in Geombinatorics, 1997, Vol. 7, No. 2, pp. 47-60.

As a result of measurements, we rarely (if ever) get precise values of the measured quantities y1,...,yn; usually, we get a set Y of the possible values of y=(y1,...,yn).

It is desirable to supply the user with a representative element of this set, and with an estimate of how different this representative can be from the (unknown) actual values. This "deviation" estimate d(y)=sup{d(y,y'): y' in Y} characterizes the accuracy of the measurements; it is therefore natural to choose as a representative the value for which this "deviation" estimate d(y) is the smallest possible.

In many reasonable cases, it is desirable to choose an l^p-metric, as a description of this "deviation". For finite p, this metric is strongly convex, and therefore, for convex sets Y, the requirement that the deviation d(y) be the smallest possible determines the representative uniquely. However, for p = infinity, uniqueness is lost: e.g., for a rectangle, in addition to the desired center, we get many other points y that are intuitively worse than the center, but for which the deviation d(y) is exactly the same as for the center.

To avoid this difficulty, we introduce a vector-valued metric as a natural limit of the l^p-geometries, and we show that for this limit metric, the representative element is chosen uniquely.
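A small numerical illustration of the non-uniqueness described above (the rectangle, its sampling, and the test points are made up for illustration): for the l^2 metric, the center of a rectangle has a strictly smaller deviation than a nearby off-center point, while for l^infinity the two deviations coincide.

import numpy as np

def deviation(y, Y, p):
    # d(y) = sup of l^p distances from y to points of the (sampled) set Y.
    return max(np.linalg.norm(y - y2, ord=p) for y2 in Y)

# Rectangle [0,4] x [0,1], sampled on a grid
Y = np.array([[a, b] for a in np.linspace(0, 4, 41) for b in np.linspace(0, 1, 11)])

center = np.array([2.0, 0.5])
off    = np.array([2.0, 0.2])   # not the center, but same l^inf deviation
for p in (2, np.inf):
    print(p, deviation(center, Y, p), deviation(off, Y, p))
# For p=2 the center is strictly better; for p=inf both deviations equal 2.0.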

File in Plain TeX.


Technical Report UTEP-CS-96-29, July 1996
Short version UTEP-CS-96-29a, July 1996
Granularity via Non-Deterministic Computations:
What We Gain and What We Lose
Vladik Kreinovich and Bernadette Bouchon-Meunier

Short version published in BUlletin for Studies and Exchanges on Fuzziness and its AppLications (BUSEFAL), No. 68, October 1996, pp. 20-24.

Full paper published in International Journal of Intelligent Systems, 1997, Vol. 12, pp. 469-481.

We humans usually think in words; to represent our opinion about, e.g., the size of an object, it is sufficient to pick one of the few (say, five) words used to describe size ("tiny", "small", "medium", etc.). Indicating which of these five words we have chosen takes 3 bits. However, in modern computer representations of uncertainty, real numbers are used to represent this "fuzziness". A real number takes 10 times more memory to store, and therefore, processing a real number takes 10 times longer than it should. Therefore, for computers to reach the ability of the human brain, Zadeh proposed to represent and process uncertainty in the computer by storing and processing the very words that humans use, without translating them into real numbers (he called this idea granularity).

If we try to define operations with words, we run into the following problem: e.g., if we define "tiny" + "tiny" as "tiny", then we have to make the counter-intuitive conclusion that the sum of any number of tiny objects is also tiny. If we define "tiny" + "tiny" as "small", we may be overestimating the size. To overcome this problem, we suggest using non-deterministic (probabilistic) operations with words. For example, in the above case, "tiny" + "tiny" is, with some probability, equal to "tiny", and, with some other probability, equal to "small".
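A minimal Python sketch of such a probabilistic word operation; the word set and the probability table are purely illustrative assumptions:

import random

# Hypothetical probabilistic addition table: ("tiny" + "tiny") is "tiny" with
# probability 0.7 and "small" with probability 0.3, etc.
ADD = {
    ("tiny", "tiny"):   [("tiny", 0.7), ("small", 0.3)],
    ("tiny", "small"):  [("small", 0.8), ("medium", 0.2)],
    ("small", "small"): [("small", 0.5), ("medium", 0.5)],
}

def word_add(a, b):
    # Non-deterministic (probabilistic) sum of two linguistic size labels.
    outcomes = ADD.get((a, b)) or ADD.get((b, a))
    r, acc = random.random(), 0.0
    for word, prob in outcomes:
        acc += prob
        if r < acc:
            return word
    return outcomes[-1][0]

print([word_add("tiny", "tiny") for _ in range(10)])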

We also analyze the advantages and disadvantages of this approach:

File in LaTeX;
short version in LaTeX.


Technical Report UTEP-CS-96-28, July 1996
Fuzzy Modus Ponens as a Calculus of Logical Modifiers:
Towards Zadeh's Vision of Implication Calculus
Bernadette Bouchon-Meunier and Vladik Kreinovich

Published in Information Sciences, 1999, Vol. 116, pp. 219-227.

If we know that an implication A --> B is to some extent true, and if we know that a statement A is true "to some extent", i.e., if we know that the statement m(A) is true, for some logical modifier m ("approximately", "slightly", etc.), then we can conclude that B is also true to some extent, i.e., that m'(B) is true for some logical modifier m'.

Thus, every fuzzy implication defines a mapping from modifiers to modifiers. This mapping describes the meaning of the fuzzy implication, so it is desirable to find out what mappings are possible. This desire is in line with Zadeh's suggestion that the future of fuzzy logic lies with treating fuzzy implications as a calculus, rather than as analytical formulas.

In this paper, we formally define implication as a mapping from modifiers to modifiers that satisfies some reasonable properties. For the important particular case of invertible mappings, we get a complete description of all modifier-to-modifier mappings that characterize fuzzy implication.

Our main mathematical result can be also used in the foundations of knowledge elicitation.

File in LaTeX.


Technical Report UTEP-CS-96-27, July 1996
Updated version UTEP-CS-96-27a, November 1996
Updated version UTEP-CS-96-27b, January 1997
Updated version UTEP-CS-96-27c, August 2001
In Case of Interval (or More General) Uncertainty,
No Algorithm Can Choose the Simplest Representative
Gerhard Heindl, Vladik Kreinovich, and Maria Rifqi

Published in Reliable Computing, 2002, Vol. 8, No. 3, pp. 213-227.

When we only know the interval of possible values of a certain quantity (or a more general set of possible values), it is desirable to characterize this interval by supplying the user with the "simplest" element from this interval, and by characterizing how different from this value we can get. For example, if, for some unknown physical quantity x, measurements result in the interval [1.95,2.1] of possible values, then, most probably, the physicist will publish this result as x ~ 2. Similarly, a natural representation of the measurement result x in [3.141592,3.141593] is x ~ pi.

In this paper, we show that the problem of choosing the simplest element from a given interval (or from a given set) is, in general, not algorithmically solvable.
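The result is negative (no algorithm can do this in general), but the intuition behind the examples can be conveyed by a naive heuristic that merely scans a small fixed library of "simple" constants. The library and its ordering below are illustrative assumptions, not the notion of simplicity analyzed in the paper:

import math
from fractions import Fraction

# A tiny, fixed library of "simple" candidates, in order of increasing "complexity".
CANDIDATES = ([float(k) for k in range(-10, 11)] +
              [math.pi, math.e, math.sqrt(2)] +
              [float(Fraction(p, q)) for q in range(2, 5) for p in range(-10, 11)])

def simplest_in(lo, hi):
    # Return the first library constant lying in [lo, hi], or None.
    for c in CANDIDATES:
        if lo <= c <= hi:
            return c
    return None

print(simplest_in(1.95, 2.1))            # -> 2.0
print(simplest_in(3.141592, 3.141593))   # -> pi (3.14159265...)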

Original file in LaTeX;
updated versions a and b in LaTeX;
latest version in compressed PostScript and in pdf.


Technical Report UTEP-CS-96-26, July 1996
Propositional Fuzzy Logics:
Decidable for Some (Algebraic) Operators,
Undecidable for More Complicated Ones
Mai Gehrke, Vladik Kreinovich, and
Bernadette Bouchon-Meunier

Published in International Journal of Intelligent Systems, 1999, Vol. 14, No. 9, pp. 935-947.

If we view fuzzy logic as a logic, i.e., as a particular case of a multi-valued logic, then one of the most natural questions to ask is whether the corresponding propositional logic is decidable, i.e., whether there exists an algorithm that, given two propositional formulas F and G, decides whether these two formulas always have the same truth value. It is known that the simplest fuzzy logic, in which & = min and \/ = max, is decidable. In this paper, we prove a more general result: that all propositional fuzzy logics with algebraic operations are decidable. We also show that this result cannot be generalized further: e.g., no deciding algorithm is possible for logics in which operations are algebraic with constructive (non-algebraic) coefficients.
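For the simplest logic mentioned above (& = min, \/ = max, together with the negation a -> 1-a), the decision procedure can be made very concrete: two such formulas coincide on the whole interval [0,1] if and only if they coincide on every assignment with values in {0, 1/2, 1} (a standard fact about Kleene algebras). The Python sketch below illustrates only this simplest case, not the general algebraic-operations algorithm of the paper:

from itertools import product

def decide_equal(f, g, n):
    # Decide whether two formulas over n variables, built from min, max and
    # the negation x -> 1-x, agree on all of [0,1]^n.  For such formulas this
    # is equivalent to agreement on every assignment with values in {0, 0.5, 1}.
    for values in product((0.0, 0.5, 1.0), repeat=n):
        if f(*values) != g(*values):
            return False
    return True

# Example: De Morgan's law  not(x & y) = (not x) \/ (not y)
lhs = lambda x, y: 1 - min(x, y)
rhs = lambda x, y: max(1 - x, 1 - y)
print(decide_equal(lhs, rhs, 2))                                  # True

# Example of a non-identity:  x \/ (not x) = 1
print(decide_equal(lambda x: max(x, 1 - x), lambda x: 1.0, 1))    # False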

File in pdf.


Technical Report UTEP-CS-96-25, June 1996
S. Maslov's Iterative Method: 15 Years Later
(Freedom of Choice, Neural Networks,
Numerical Optimization, Uncertainty Reasoning, and
Chemical Computing)
Vladik Kreinovich

Published in Problems of Reducing the Exhaustive Search,
American Math. Soc., Providence, RI, 1997, pp. 175-189.

Preliminary version published as Universit\'e Paris VI et VII, Institut Blaise Pascal, Laboratoire Formes et Intelligence Artificielle LAFORIA, Technical Report 96/23, September 1996.

In 1981, S. Maslov proposed a new iterative method for solving propositional satisfiability problems. The 1981-87 results related to this method are described in the present book. In this chapter, we briefly recall the origins of Maslov's method, and describe further results and ideas related to this method.

File in plain TeX, compressed PostScript, and pdf.


Technical Report UTEP-CS-96-24, June 1996
Updated version UTEP-CS-96-24a, April 1998
From Ordered Beliefs to Numbers:
How to Elicit Numbers Without Asking for Them
(Doable but Computationally Difficult)
Brian Cloteaux, Christophe Eick, Bernadette Bouchon-Meunier,
and Vladik Kreinovich

Published in International Journal of Intelligent Systems, 1998, Vol. 13, No. 9, pp. 801-820;
preliminary version published as Universite Paris VI et VII, Institut Blaise Pascal, Laboratoire Formes et Intelligence Artificielle LAFORIA, Technical Report 96/20, June 1996.

One of the most important parts of designing an expert system is elicitation of the expert's knowledge. This knowledge usually consists of facts and rules. Eliciting these rules and facts is relatively easy; the more complicated task is assigning weights (numerical or interval-valued degrees of belief) to different statements from the knowledge base. Experts often cannot quantify their degrees of belief, but they can order them (by suggesting which statements are more reliable). It is, therefore, reasonable to try to reconstruct the degrees of belief from such an ordering.

In this paper, we analyze when such a reconstruction is possible, whether it leads to unique values of degrees of belief, and how computationally complicated the corresponding reconstruction problem can be.

File in pdf.


Technical Report UTEP-CS-96-23, June 1996
Axiomatic Description of Implication Leads to a
Classical Formula with Logical Modifiers:
(In Particular,
Mamdani's Choice of "And" as Implication
is Not So Weird After All)
Bernadette Bouchon-Meunier and Vladik Kreinovich

There are many ways to define fuzzy logic, i.e., to extend logical operations from the set {0,1} of truth values of classical logic to the set [0,1] of truth values of fuzzy logic in such a way that natural properties of these logical operations are still true. For "and", "or", and "not", we get quite reasonable results. However, for implication, the extensions that are obtained in this manner do not usually include Mamdani's choice of "and" as implication, the choice that forms the basis of fuzzy control - the most successful application of fuzzy logic.

In the present paper, we develop a new set of reasonable properties that leads to a new description of fuzzy implication. We get a general description of all operations that satisfy these properties.

This description includes Mamdani's operation as a natural particular case. Thus, Mamdani's choice of "and" as implication is not so strange after all.

File in LaTeX.


Technical Report UTEP-CS-96-22, June 1996
Updated version UTEP-CS-96-22a, January 1997
On a Theoretical Justification of the Choice of
Epsilon-Inflation in Pascal-XSC
Vladik Kreinovich, Guenter Mayer, and Scott Starks

Published in Reliable Computing, 1997, Vol. 3, No. 4, pp. 437-452.

In many interval computation methods, if we cannot guarantee a solution within a given interval, it often makes sense to "inflate" this interval a little bit. There exist many different "inflation" methods. The authors of PASCAL-XSC, after empirically comparing the behavior of different inflation methods, decided to implement the formula [x-,x+] --> [(1+e)x- - e*x+, (1+e)x+ - e*x-]. A natural question is: Is this choice really optimal (in some reasonable sense), or is it only an empirical approximation to the truly optimal choice?
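A minimal Python sketch of this inflation formula (the interval and the value of e below are arbitrary illustrations, not PASCAL-XSC defaults):

def epsilon_inflate(x_lo, x_hi, e):
    # Epsilon-inflation of the interval [x_lo, x_hi]:
    # [x_lo, x_hi] -> [(1+e)*x_lo - e*x_hi, (1+e)*x_hi - e*x_lo],
    # i.e., each endpoint is pushed outward by e times the interval's width.
    return ((1 + e) * x_lo - e * x_hi, (1 + e) * x_hi - e * x_lo)

print(epsilon_inflate(1.0, 2.0, 0.1))   # -> (0.9, 2.1)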

In this paper, we show that this empirical choice can be theoretically justified. Namely, we will give two justifications:

Original file in LaTeX;
updated version in LaTeX, compressed PostScript, and pdf.


Technical Report UTEP-CS-96-21, May 1996
Kolmogorov's Theorem
and its Impact on Soft Computing
Hung T. Nguyen and Vladik Kreinovich

Published in: Ronald R. Yager and Janusz Kacprzyk (eds.), The Ordered Weighted Averaging Operators: Theory and Applications, Kluwer, Boston, MA, 1997, pp. 3-17.

In this chapter, we describe various applications to soft computing of Kolmogorov's theorem on representing continuous functions of several variables as superpositions of functions of one and two variables. Kolmogorov's theorem leads to a theoretical justification, as well as to design methodologies, for neural networks. In the design of intelligent systems, Kolmogorov's theorem is used to show that general logical operators can be expressed in terms of basic fuzzy logic operations.

In the area of reliable computing (i.e., computing that takes into consideration the accuracy of the input data), an extended version of Kolmogorov's theorem justifies the need to use operations with three or more operands in soft computing. Such operations have already been actively used in soft computing; the simplest (and, so far, most used) of such operations are ordered weighted averaging (OWA) operators proposed by R. R. Yager.

File in Plain TeX.


Technical Report UTEP-CS-96-20, May 1996
Pure Quantum States are Fundamental,
Mixtures (Composite States) are Mathematical Constructions:
An Argument Using Algorithmic Information Theory
Vladik Kreinovich and Luc Longpre

Published in International Journal of Theoretical Physics, 1997, Vol. 36, No. 1, pp. 167-176.

From the philosophical viewpoint, two interpretations of the quantum measurement process are possible: According to the first interpretation, when we measure an observable, the measured system moves into one of the eigenstates of this observable ("the wave function collapses"); in other words, the Universe "branches" by itself, due to the very measurement procedure, even if we do not use the result of the measurement. According to the second interpretation, the system simply moves into a mixture of eigenstates, and the actual "branching" occurs only when an observer reads the measurement results. According to the first interpretation, a mixture is a purely mathematical construction, and in the real physical world, a mixture actually means that the system is in one of the "component" states.

In this paper, we analyze this difference from the viewpoint of the algorithmic information theory; as a result of this analysis, we argue that only pure quantum states are fundamental, while mixtures are simply useful mathematical constructions.

File in LaTeX.


Technical Report UTEP-CS-96-19, April 1996
Randomness as Incompressibility: A Non-Algorithmic Analogue
Vladik Kreinovich and Luc Longpre

According to the original Kolmogorov-Martin-Lof (KML) idea, an infinite sequence w is random (does not belong to any constructive set of measure 0) iff it cannot be compressed, i.e., iff for every n, the length of the shortest possible compression (called Kolmogorov complexity) of its n-fragment wn is close to n. The resulting definition seems purely algorithmic, because if we consider arbitrary (not necessarily algorithmic) compression schemes, then we can always compress every fragment wn of a given sequence into the binary code of its length n. In this paper, however, we introduce a uniform non-effective compressibility notion and show that Kolmogorov's idea of randomness as incompressibility is applicable to classical (non-algorithmic) measure as well: a set is of measure 0 iff there exists a coding procedure that compresses all elements of this set.

File in LaTeX.


Technical Report UTEP-CS-96-18, April 1996
Astrogeometry: Geometry Explains Shapes of Celestial Bodies
Andrei Finkelstein, Olga Kosheleva, and Vladik Kreinovich

Published in Geombinatorics, 1997, Vol. VI, No. 4, pp. 125-139.

Celestial bodies such as galaxies, stellar clusters, planetary systems, etc., have different geometric shapes (e.g., galaxies can be spiral or circular, etc.). Usually, complicated physical theories are used to explain these shapes; for example, several dozen different theories explain why many galaxies are of spiral shape; see, e.g., (Toomre 1973, Strom 1979, Vorontsov-Veliaminov 1987, Binney 1989). Some rare shapes are still unexplained.

In the present paper, we show that to explain these "astroshapes", we do not need to know the details of physical equations: practically all the shapes of celestial bodies can be explained by simple geometric invariance properties. This fact explains, e.g., why so many different physical theories lead to the same spiral galaxy shape.

File in plain TeX.


Technical Report UTEP-CS-96-17, April 1996
Earthquakes and Geombinatorics
Diane Doser, Mohamed Amine Khamsi, and Vladik Kreinovich

Published in Geombinatorics, 1996, Vol. VI, No. 2, pp. 48-54.

According to modern geophysics, earthquakes mainly occur in the places where tectonic plates interact.

Plate tectonics started with analyzing the simplest plate interactions: head-on collisions and pull-apart motions; such interactions are the most common. The corresponding interaction zones are often very seismically active; in addition to frequent small and medium earthquakes, they host the most destructive earthquakes.

Not so common are oblique plate collisions (in which plates collide at an oblique angle) and lateral motions. Earthquakes caused by these non-canonical interactions are not so destructive, but they can be even more dangerous than head-on ones, because in head-on collisions, small earthquakes serve as a warning for a big one, while in oblique and lateral collision zones, earthquakes can occur without warning.

To better predict such earthquakes that come without warning, it is necessary to look for traces of such quakes in the geological past. This necessity raises an important question: currently, non-canonical collision zones are relatively rare. Are we sure that they occurred in the past at all? Maybe our search for such past quakes is futile?

To answer this question, in the present paper, we reformulate it in topological and geometric terms. Our answer: it is impossible to have only head-on and pull-apart collisions, so there must have been oblique and/or lateral collisions in each past geological epoch.

File in plain TeX.


Technical Report UTEP-CS-96-16, April 1996
Zeros of Riemann's Zeta Function are Uniformly Distributed, but Not Random: An Answer to Calude's Open Problem
Luc Longpre and Vladik Kreinovich

Published in the Bulletin of the European Association for Theoretical Computer Science (EATCS), 1996, Vol. 59, pp. 163-164.

It is known that the imaginary parts of the roots of Riemann's zeta function are uniformly distributed. This fact led Calude to a natural question: is the corresponding sequence (of binary expansions merged together) random in the sense of Martin-Lof? Our answer is negative: this sequence is not random.

File in LaTeX and in pdf.


Technical Report UTEP-CS-96-15, April 1996
Was Stonehenge an Observatory?
Gilbert Castillo and Vladik Kreinovich

Published in Geombinatorics, 1998, Vol. 7, No. 3, pp. 83-87.

In 1965, researchers found that the direction formed by two stones from Stonehenge, a mysterious ancient structure, pointed exactly in the direction of sunrise at solstice. Several similar coincidences were discovered, which led many people to believe that Stonehenge was an ancient observatory. In this paper, we analyze this problem from a geometric viewpoint and show that similar coincidences can happen (with high probability) for randomly placed stones. Thus, Stonehenge's coincidences cannot be viewed as proof that this place was an observatory.

File in plain TeX.


Technical Report UTEP-CS-96-14, April 1996
Simulating Fuzzy Control as a New Method of Eliciting Membership Functions
Bernadette Bouchon-Meunier and Vladik Kreinovich

Published in the Proceedings of the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'96), Granada, Spain, July 1-5, 1996, Vol. 2, pp. 1043-1048.

According to traditional fuzzy control methodology, an expert controller formulates his or her control rules using terms from natural language, and then, we encounter the task of selecting membership functions and "and"/"or" operations. This is a very difficult task because:

As a result, usually, time-consuming tuning is applied to improve the quality of the resulting fuzzy control.

In this paper, we propose a new method of eliciting membership functions and "and" and "or" operations that does not depend on the experts' ability to quantify their behavior. Namely, we use the expert's natural-language terms to formulate rules different from the ones that the expert has actually used, ask the experts to simulate control that follows the new set of rules, and record the resulting control. We show that from these records, we can uniquely reconstruct the membership functions and the "and" and "or" operations.

File in LaTeX, uses proceedingsIPMU96.sty.


Technical Report UTEP-CS-96-13, April 1996
Expanded version UTEP-CS-96-13a, September 1996
Final version UTEP-CS-96-13b, January 1997
Fuzzy Numbers are the Only Fuzzy Sets That Keep Invertible Operations Invertible
B. Bouchon-Meunier, O. Kosheleva, V. Kreinovich, and H. T. Nguyen

Short version published in the Proceedings of the International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems (IPMU'96), Granada, Spain, July 1-5, 1996, Vol. 2, pp. 1049-1054.

Full paper published in Fuzzy Sets and Systems, 1997, Vol. 91, No. 2, pp. 155-163.

In standard arithmetic, if we, e.g., accidentally add a wrong number y to the preliminary result x, we can undo this operation by subtracting y from the result x+y. In this paper, we prove the following two results:

Thus, the invertibility requirement leads to a new characterization of the class of all fuzzy numbers.
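As background intuition for why invertibility of set-valued operations is a non-trivial requirement (this is a generic interval-arithmetic fact, not the theorem proved in the paper): adding an interval y and then subtracting the same interval y does not, in general, return the original interval. A minimal Python illustration:

def interval_add(a, b):
    return (a[0] + b[0], a[1] + b[1])

def interval_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

x, y = (1.0, 2.0), (0.0, 1.0)
print(interval_sub(interval_add(x, y), y))   # -> (0.0, 3.0), not the original (1.0, 2.0)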

The original version: File in LaTeX, uses proceedingsIPMU96.sty and amssymbols.sty.
The expanded versions a and b, in LaTeX, use amssymbols.sty;
final version in pdf.


Technical Report UTEP-CS-96-12, April 1996
Space-Time is "Square Times" More Difficult to Approximate than Euclidean Space
Vladik Kreinovich

Published in Geombinatorics, 1996, Vol. 6, No. 1, pp. 19-29.

How many points do we need to approximate a given metric space S (e.g., a ball in the Euclidean space) with a given accuracy e>0? To be more precise, how many points do we need to reproduce the metric d(X,Y) on S with an accuracy e? This problem is known to be equivalent to the following geombinatoric problem: find the smallest number of balls of given radius e that cover a given set S.

A similar approximation problem is also important for space-times. In this case, instead of a regular metric d(X,Y) that describes the distance between points X and Y, we have a kinematic metric t(X,Y) that describes the proper time between events X and Y. It turns out, rather surprisingly, that this space-time analogue of the above geombinatoric problem requires many more points: e.g., to approximate a compact set in a 4-D Euclidean space, we need ~ 1/e^4 points, while to approximate a similar compact set in a 4-D space-time, we need ~ 1/e^8 points, approximately the square of the previous number.

File in plain TeX.


Technical Report UTEP-CS-96-11, March 1996
On the Lattice Extensions of Partial Orders of Rings
Piotr J. Wojciechowski and Vladik Kreinovich

Published in Communications in Algebra, 1997, Vol. 25, No. 3, pp. 935-941.

We propose the study of rings, every partial ordering of which extends to a lattice ordering (L*-rings). We show techniques enabling us, in some important cases, to decide whether a ring is or is not an L*-ring. In particular, we show that direct products of rings without nilpotent elements, rings of matrices and polynomial rings are not L*-rings. On the other hand, we give an example of a (non-associative) ring every direct partial order of which extends to a lattice order but fails to extend to a total order (not O*-ring). We also prove that under the requirement that the orders in question are consistent (the definition of consistency is given in the text) or that squares of elements are positive, an L*-ring can have only algebraic elements. Finally, we generalize the main result of (V. Kreinovich, "If a polynomial identity guarantees that every partial order on a ring can be extended, then this identity is true only for a zero-ring", Algebra Universalis, 1995, Vol. 33, No. 2, pp. 237-242) to L*-rings having positive squares.

File in LaTeX, uses tex-set.tex.


Technical Report UTEP-CS-96-10, March 1996
Updated version UTEP-CS-96-10a, January 1997
Updated version UTEP-CS-96-10b, March 1997
Updated version UTEP-CS-96-10c, May 1997
Using Gelfond-Przymusinska's Epistemic Specifications to Justify (Some) Heuristic Methods Used in Expert Systems and Intelligent Control
Hung T. Nguyen and Vladik Kreinovich

Published in Soft Computing, 1997, Vol. 1, No. 4, pp. 198--209.

We show that epistemic specifications, proposed by M. Gelfond and H. Przymusinska in their 1991 paper, lead to a natural justification of several heuristic methods used in expert systems and in intelligent control.

This paper is an overview of several results, some of which have already been published before. In contrast to the original publications, that were targeted specifically towards researchers that design and analyze traditional (degree of certainty, MYCIN-type) expert systems and intelligent control systems, we present the corresponding results in such a way that they become completely understandable (and hopefully, natural) to the logic programming community as well.

The relationship between epistemic specifications and heuristic methods not only makes heuristic methods more reliable, it also adds confidence that the formalism of epistemic specifications, as proposed by M. Gelfond and H. Przymusinska, is indeed an appropriate formalization of knowledge: Indeed, the fact that the language of epistemic specifications, that was originally proposed without any numerical degree of certainty in mind, turns out to be quite consistent with (practically successful) number-valued heuristics, is a strong argument in favor of this language.

Original file; first updated file; second updated file, and final file, all in LaTeX.


Technical Report UTEP-CS-96-9, March 1996
Fuzzy Logic as Applied Linear Logic
Vladik Kreinovich, Hung Nguyen, and Piotr Wojciechowski

Published in the Bulletin for Studies and Exchanges on Fuzziness and its Applications (BUSEFAL), 1996, No. 67, pp. 4-13.

In common sense reasoning, logical connectives such as "and" have two different meanings. For example, let A stand for "I have a dollar", B for "I can buy a can of coke", and C for "I can buy a cookie", and let a coke and a cookie cost $1 each. Then, A --> B and A --> C. In 2-valued logic, we can conclude that A --> (B & C), but, since our resources are limited, with $1, we cannot buy both a coke and a cookie. We need two dollars to buy both, which can be expressed as (A & A) --> (B & C). (Here, clearly, A & A =/= A.)

To formalize these two meanings, a new logic with several different connectives for "and", "or", etc., was proposed in 1987: linear logic. In this paper, we show that from the algebraic viewpoint, fuzzy logic can be viewed as an important particular case of linear logic. For this particular case, we find explicit expressions for the new logical operations proposed by linear logic.
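A toy Python illustration of this resource-sensitive reading of "and" (the rules and resource names are made up for this example): a rule consumes its inputs, so having one dollar is genuinely different from having two.

from collections import Counter

def apply_rule(resources, consumed, produced):
    # Resource-sensitive implication: fire a rule only if its inputs are available,
    # and consume them (so A & A is genuinely different from A).
    if all(resources[r] >= n for r, n in consumed.items()):
        out = resources.copy()
        out.subtract(consumed)
        out.update(produced)
        return out
    return None

one_dollar = Counter(dollar=1)
two_dollars = Counter(dollar=2)
buy_coke   = ({"dollar": 1}, {"coke": 1})
buy_cookie = ({"dollar": 1}, {"cookie": 1})

# With one dollar we can buy a coke OR a cookie, but not both:
print(apply_rule(apply_rule(one_dollar, *buy_coke), *buy_cookie))   # -> None
# With two dollars ("A & A"), we can buy both:
print(apply_rule(apply_rule(two_dollars, *buy_coke), *buy_cookie))  # -> coke and cookie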

File in LaTeX.


Technical Report UTEP-CS-96-8, February 1996
Updated version UTEP-CS-96-8a, August 1996
Normal Forms for Fuzzy Logic - an Application of Kolmogorov's Theorem
Vladik Kreinovich, Hung T. Nguyen, and David A. Sprecher

Published in the International Journal of Uncertainty, Fuzziness, and Knowledge-Based Systems, 1996, Vol. 4, No. 4, pp. 331-349.

This paper addresses mathematical aspects of fuzzy logic. The main results obtained in this paper are:

Original file in LaTeX.
Updated version in LaTeX.


Technical Report UTEP-CS-96-7, January 1996
Updated version UTEP-CS-96-7a, March 1997
Updated version UTEP-CS-96-7b, March 1998
On the Possibility of Using Complex Values in Fuzzy Logic for Representing Inconsistencies
Hung T. Nguyen, Vladik Kreinovich, and Valery Shekhter

Published in International Journal of Intelligent Systems, 1998, Vol. 13, No. 8, pp. 683-714.

In science and engineering, there are "paradoxical" cases when we have some arguments in favor of some statement A (so, the degree to which A is known to be true is positive (non-zero)), and we also have some arguments in favor of its negation -A, and we do not have enough information to tell which of these two statements is correct. Traditional fuzzy logic, in which "truth values" are described by numbers from the interval [0,1], easily describes such "paradoxical" situations: the degree a to which the statement A is true, and the degree 1-a to which its negation -A is true can be both positive. In this case, if we use traditional fuzzy &-operations (min or product), the degree of belief a & (1-a) in A & -A is positive, indicating that there is some degree of inconsistency in the initial beliefs.

When we try to use fuzzy logic to formalize expert reasoning in humanities, we encounter the problem that in humanities, in addition to the above-described paradoxical situations caused by the incompleteness of our knowledge, there are also true paradoxes, i.e., statements that are perceived as true and false at the same time. For such statements, A & -A = "true". The corresponding equality a & (1-a) = 1 is impossible in traditional fuzzy logic (where a & (1-a) is always <= 0.5), so, in order to formalize such true paradoxes, we must extend the set of degrees of belief from the interval [0,1].

In this paper, we show that such an extension can be achieved if we allow truth values to be complex numbers.
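The parenthetical bound above is easy to verify numerically: with & = min, the degree of belief in A & -A is min(a, 1-a), which never exceeds 0.5 (and with & = product, a*(1-a) never exceeds 0.25). A one-line sanity check in Python:

# max over a grid of a in [0,1] of min(a, 1-a); the maximum 0.5 is attained at a = 0.5
print(max(min(a, 1 - a) for a in (i / 1000 for i in range(1001))))   # -> 0.5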

Original file in LaTeX, uses IJA.sty;
Updated version in LaTeX;
Final version in LaTeX.


Technical Report UTEP-CS-96-6, January 1996
Relating Fuzzy Models, Theories of Action, and Reactive Robot Control
Chitta Baral, Hung T. Nguyen, and Vladik Kreinovich

In addition to fuzzy control rules, we often have fuzzy rules that describe how a system reacts to a control (i.e., a fuzzy model of the system). Traditionally, with a few exceptions (such as Sugeno's helicopter control), control rules are designed without using the fuzzy model. One reason for this is that the problem of finding control rules that are adequate for a given rule-based model (fuzzy or crisp) is computationally intractable. We propose to use the fuzzy model for checking whether given control rules are adequate. If they are not, we explain to the experts what was wrong with their previous set of rules, and we ask the experts to modify their rules. As a result, we will hopefully end up with an adequate set of control rules.

File in LaTeX.


Updated version UTEP-CS-96-5a, August 1996
Updated version UTEP-CS-96-5b, September 1996
Acausal Processes and Astrophysics:
Case When Uncertainty is Non-Statistical
Vladimir Dimitrov, Misha Koshelev, and Vladik Kreinovich

Published in BUlletin for Studies and Exchanges on Fuzziness and its AppLications (BUSEFAL), No. 69, January 1997, pp. 183-191.

In Newtonian physics, if we know the state of the world at some moment of time, then we can precisely predict the state of the world at all future times. In this sense, Newtonian physics is deterministic. In modern physics (starting with quantum mechanics), theories are usually non-deterministic in the sense that even if we know exactly the initial state of the world, we cannot uniquely predict the future state of the world. In quantum mechanics (and in most modern quantum-based physical theories), the best we can get is probabilities of different future states. In this sense, in the majority of modern physical theories, uncertainty is of statistical nature.

Lately, a new area of acausal (causality violating) processes has entered mainstream physics. This area has important astrophysical applications. In this paper, we show that acausal processes lead to non-statistical uncertainty.

The main purpose of this paper is to inform specialists in uncertainty representation about this situation, with a hope that formalisms for describing non-statistical uncertainty that have been developed to represent uncertainty of human knowledge (such as fuzzy logic) will help in formalizing physical non-statistical uncertainty as well.

Files of versions a and b in LaTeX.


Technical Report UTEP-CS-96-5, January 1996
Fuzzy Logic and Time Travel: Possible Use of Fuzzy Logic in Mainstream Astrophysics
Vladimir Dimitrov, Misha Koshelev, and Vladik Kreinovich

Fuzzy logic can be (and has been) successfully used to describe reasoning in physics, but the physical theories that result from this reasoning have so far been mathematically precise (crisp). Lately, a new area of acausal (causality-violating) processes has entered mainstream physics; this area has real paradoxes, and therefore, we believe that fuzzy logic can be used to formalize it. Possible astrophysical applications are outlined.

File in LaTeX.


Technical Report UTEP-CS-96-4, January 1996
Updated version UTEP-CS-96-4a, May 1996
On Re-Scaling in Fuzzy Control and Genetic Algorithms
Hung T. Nguyen and Vladik Kreinovich

Published in Proceedings of the 1996 IEEE International Conference on Fuzzy Systems, New Orleans, LA, September 8-11, Vol. 3, pp. 1677-1681.

It is known that in many applications of fuzzy control or genetic algorithms, appropriate re-scaling of the control parameters can drastically improve the performance. Successful applications of re-scaling are normally based on fine-tuning experiments or on the good knowledge of the area.

These successes tempt us to use re-scaling in situations when we do not have good knowledge of the area and have not experimented much. Surprisingly, in such situations, re-scaling often not only fails to improve the performance, but actually degrades it.

In this paper, we provide a simple mathematical explanation of this phenomenon.

First draft: file in LaTeX.
Updated version file in LaTeX, uses style files 10pt.sty and latex8.sty.


Technical Report UTEP-CS-96-3, January 1996
Updated version UTEP-CS-96-3a, May 1996
Classical-Logic Analogue of a Fuzzy "Paradox"
Hung T. Nguyen and Vladik Kreinovich

Published in Proceedings of the 1996 IEEE International Conference on Fuzzy Systems, New Orleans, LA, September 8-11, Vol. 1, pp. 428-431.

One of the main reasons why classical logic is not always the most adequate tool for describing human knowledge is that in many real-life situations, we have some arguments in favor of a certain statement A and some arguments in favor of its negation -A. As a result, we want to consider both A and -A to be (to some extent) true. Classical logic does not allow us to do that, while in fuzzy logic, if the degree of belief in A is different from 0 and 1, then we have a positive degree of belief in a statement A & -A.

In classical logic, A & -A is always false, so, when we get both A and -A in classical logic (a paradox), this means that something was wrong. In view of that, the fact that A & -A is possible in fuzzy logic, is viewed by some logicians as a "paradox" of fuzzy logic.

In this paper, we show that in classical logic, although we cannot directly have A & -A, we can, in some sense, have confidence in both A and "not A". In other words, we show that what classical logicians consider a fuzzy paradox has a direct analogue in classical logic.

First draft UTEP-CS-96-3 file in LaTeX.
Updated version UTEP-CS-96-3a file in LaTeX, uses style files 10pt.sty and latex8.sty, and in pdf.


Technical Report UTEP-CS-96-2, January 1996
"Interval Rational = Algebraic" Revisited: A More Computer Realistic Result
Anatoly V. Lakeyev and Vladik Kreinovich

Published in SIGNUM Newsletter, 1996, Vol. 31, No. 1, pp. 14-17.

In the paper "Interval rational = algebraic" (SIGNUM Newlsetter, 1995, Vol. 30, No. 4, pp. 2-13), it is shown that if we add "interval computations" operation to the list of arithmetic operations that define rational functions, then the resulting class of "interval-rational" functions practically coincides with the class of all algebraic functions. By "practically coincides", we mean that first, every interval-rational function is algebraic, and that second, for every algebraic function A(x), there exists an interval-rational function that coincides with A(x) for almost all x. In that paper, "almost all" was understood in terms of Lebesgue measure; this result is therefore, not very computer realistic, because all the numbers representable in a computer are rational and therefore, form a set of Lebesgue measure 0. In this article, we formulate a more computer-realistic version of the result that interval + rational = algebraic.

File in LaTeX.


Technical Report UTEP-CS-96-1, January 1996
Updated version UTEP-CS-96-1a, March 1996
Fuzzy Logic, Logic Programming, and Linear Logic: Towards a New Understanding of Common Sense
Hung T. Nguyen and Vladik Kreinovich

Published in the Proceedings of NAFIPS'96, Berkeley, CA, June 1996, pp. 546-550.

Fuzzy logic was originally proposed as a tool for describing human reasoning. Currently, the main area of applications of fuzzy logic is in fuzzy control, where the choice of a logic is usually motivated not by logical considerations (i.e., not by what best describes how people actually think), but by purely pragmatic, engineering considerations: what logic would lead to the best control.

In this paper, we try to return to the original meaning of fuzzy logic: a tool for describing human reasoning. We analyze why the existing formalisms are not always adequate, and describe possible modifications of fuzzy logic. Our analysis shows that there are deep similarities between the descriptions of common sense reasoning in three different fields: fuzzy logic, logic programming, and linear logic. Thus, the future formalism for describing human reasoning will probably be a synthesis of these three.

File in LaTeX, uses IEEEtran.sty.