Computer Science Department

Abstracts of 1999 Reports

Technical Report UTEP-CS-99-49, December 1999

Extracting Fuzzy Sparse Rule Base by Cartesian Representation and Clustering

Yeung Yam, Vladik Kreinovich, and Hung T. Nguyen

Sparse rule bases and interpolation have been proposed as a possible solution to alleviate the geometric complexity problem of large fuzzy rule bases. However, no formal method for extracting a sparse rule base is yet available. This paper combines the recently introduced Cartesian representation of membership functions with a mountain-method-based clustering technique for this extraction. A case study is included to demonstrate the effectiveness of the approach.

Technical Report UTEP-CS-99-48a, October 2000

Aerospace Applications of Soft Computing and Interval Computations (with an Emphasis on Simulation and Modeling)

Scott A. Starks and Vladik Kreinovich

Published in
*Systems Analysis Modelling Simulation*, 2002, Vol. 42, No. 5,
pp. 713-734.

This paper presents a brief overview of our research in applications of soft computing and interval computations to aerospace problems, with a special emphasis on simulation and modeling.

Technical Report UTEP-CS-99-48, December 1999

Aerospace Applications of Soft Computing and Interval Computations (with an Emphasis on Multi-Spectral Satellite Imaging)

Scott A. Starks and Vladik Kreinovich

Published in: Mo Jamshidi, Madjid Fathi, and Takeshi Furunashi (eds.),
*Soft Computing, Multimedia, and Image Processing.
Proceedings of the 2000 World Automation Congress WAC'2000,
Maui, Hawaii, June 11-16, 2000*, TSI Press, Albuquerque, 2000,
pp. 644-651.

This paper presents a brief overview of our research in applications of soft computing and interval computations to aerospace problems, with a special emphasis on multi-spectral satellite imaging.

Technical Report UTEP-CS-99-47, November 1999

An Optimal FFT-Based Algorithm for Mosaicking Images, with Applications to Satellite Imaging and Web Search

Stephen Gibson, Olga Kosheleva, Luc Longpre, Brian Penn, and Scott A. Starks

Published in *Proceedings of the Joint Conferences on Information
Sciences*, Atlantic City, February 27-March 3, 2000, Vol. I,
pp. 248-251.

Digital data storage is becoming ever more abundant and cheap. This, along with other technological advances, has brought about an age of mass storage of information, much of it in the form of images. In order to be able to process these stockpiles of image data, new and faster computer algorithms are needed.

One area of interest is that of image mosaicking, i.e., comparing two overlapping images and finding the proper scaling, angle of rotation, and translation needed to fit one with the other. Early methods for mosaicking images included visual inspection or an exhaustive, pixel-by-pixel search for the best match. With such large quantities of images as we have today, and the increasing need for accuracy, these methods are no longer feasible. Several new mosaicking methods have been proposed based on the Fast Fourier Transform (FFT).

The existing FFT-based algorithms do not always lead to reasonable mosaicking. In this paper, we formalize the corresponding expert rules and, as a result, design an optimal FFT-based mosaicking algorithm.
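
The paper formalizes expert rules on top of FFT-based registration; as background, here is a minimal sketch (in Python with NumPy) of the standard FFT phase-correlation step that such methods build on. It recovers a pure integer translation between two equally sized overlapping images; the paper's optimal algorithm also handles scaling and rotation, which this toy version does not.

```python
import numpy as np

def phase_correlation_shift(img_a, img_b):
    """Estimate the integer translation between two equally sized
    grayscale images via FFT phase correlation."""
    fa = np.fft.fft2(img_a)
    fb = np.fft.fft2(img_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12  # keep only phase information
    correlation = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(correlation), correlation.shape)
    # Peaks past the midpoint correspond to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, correlation.shape))

# Shift a random image by (3, 5) and recover the shift.
rng = np.random.default_rng(0)
base = rng.random((64, 64))
shifted = np.roll(base, shift=(3, 5), axis=(0, 1))
print(phase_correlation_shift(shifted, base))  # prints (3, 5)
```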

Technical Report UTEP-CS-99-46, November 1999

Towards Mathematical Foundations of Information Retrieval: Dependence of Website's Relevance on the Number of Occurrences of a Queried Word

Laszlo T. Koczy, Vladik Kreinovich, Yohans Mendoza, Hung T. Nguyen, and Harry Schulte

Published in *Proceedings of the Joint Conferences on Information
Sciences*, Atlantic City, February 27-March 3, 2000, Vol. I,
pp. 252-255.

In response to a query,
web search tools often return many websites which are not really
relevant. One reason for this is that the queried word may have several
meanings different from the one which the user has in mind.
To eliminate these undesirable meanings, it is reasonable to look for
occurrences not only of the queried word itself, but also for
other words related to this particular meaning, and then select only
the websites for which, based on this information, we are confident
about their relevance. For this strategy to work, we must be able to
estimate the degree of relevance *d* of a website
based on the number of occurrences *N* of a given word.

In this paper, we
describe the optimal model for the dependence *d(N)*.
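
The abstract does not state the optimal form of d(N); purely as a hypothetical illustration of the qualitative behavior one would expect (zero relevance for zero occurrences, saturation for many), here is a saturating model d(N) = N/(N+k) with a made-up half-saturation constant k:

```python
def relevance(n_occurrences, k=3.0):
    """Hypothetical saturating relevance model d(N) = N / (N + k):
    d(0) = 0, d grows with N, and d -> 1 as N -> infinity.
    The half-saturation constant k is a made-up parameter; the paper
    derives its own optimal form of the dependence d(N)."""
    return n_occurrences / (n_occurrences + k)

for n in (0, 1, 3, 30):
    print(n, round(relevance(n), 3))
```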

Technical Report UTEP-CS-99-45, November 1999

Updated version UTEP-CS-99-45a, April 2002

Range Estimation is NP-Hard for Epsilon-Square Accuracy and Feasible for Epsilon to the Power of 2-Delta

Vladik Kreinovich

Published in
*Reliable Computing*, 2002, Vol. 8, No. 6, pp. 481-491.

The basic problem of interval computations is: given a function
f(x1,...,xn) and n intervals [xi-,xi+],
find the (interval) range Y of the given function on
the given intervals. It is known that even for quadratic polynomials
f(x1,...,xn), this problem is NP-hard. In this paper,
following the advice of A. Neumaier, we analyze the complexity of
*asymptotic* range estimation, when the bound "epsilon" on the
width of the input intervals tends to 0. We show that for small
c>0, if we want to
compute the range with an accuracy c times epsilon squared, then the
problem is still NP-hard; on the other hand, for every delta>0,
there exists a feasible algorithm which, asymptotically, estimates the
range with an accuracy c times epsilon to the power 2-delta.
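
To see where the difficulty lies, note that naive interval arithmetic is feasible but yields only an enclosure of the range, often a loose one; the hard part is tightness. A small sketch (in Python; the interval operations are the textbook formulas):

```python
def interval_add(a, b):
    """Sum of two intervals [a-, a+] + [b-, b+]."""
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):
    """Product of two intervals: min/max over endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

# f(x) = x*x - x on x in [0, 1]: the true range is [-0.25, 0], but
# evaluating the expression in interval arithmetic overestimates it.
x = (0.0, 1.0)
enclosure = interval_add(interval_mul(x, x), (-x[1], -x[0]))
print(enclosure)  # prints (-1.0, 1.0): valid, but much wider than [-0.25, 0]
```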

Technical Report UTEP-CS-99-44, November 1999

Geombinatoric Aspects of Processing Large Images and Large Spatial Databases

Jan Beck, Vladik Kreinovich, and Brian Penn

Published in *Geombinatorics*, 2001, Vol. 11,
No. 4, pp. 6-12.

Computer processing can drastically improve the quality of an image and the reliability and accuracy of a spatial database. A large image (database) does not easily fit into the computer memory, so we process it by downloading pieces of the image. Each downloading takes a lot of time, so, to speed up the entire processing, we must use as few pieces as possible.

Many algorithms for processing images and spatial databases consist of comparing the value at a certain spatial location with values at nearby locations. For such algorithms, we must select (possibly overlapping) sub-images in such a way that for each point, its neighborhood (of given radius) belongs to a single sub-image. We reformulate the corresponding optimization problem in geometric terms, and use this reformulation to provide some information about the solution. Namely, for images, the optimal sub-images should be bounded by straight lines or circular arcs; for non-homogeneous spatial databases, we deduce an explicit expression for the curvature of the boundaries in terms of the data density in different points.

Technical Report UTEP-CS-99-43, November 1999

Updated version UTEP-CS-99-43d, January 2000

Which Sensor Set is Better for Monitoring Spacecraft Subsystems? A Geometric Answer and its Probabilistic Generalization

Matthew Barry and Vladik Kreinovich

Short versions published
in *Proceedings of the Joint Conferences on Information
Sciences*, Atlantic City, February 27-March 3, 2000, Vol. I,
pp. 244-247, and in
Mo Jamshidi, Madjid Fathi, and Takeshi Furunashi (eds.),
*Soft Computing, Multimedia, and Image Processing.
Proceedings of the 2000 World Automation Congress WAC'2000,
Maui, Hawaii, June 11-16, 2000*, TSI Press, Albuquerque, 2000,
pp. 652-658. Full paper appeared in
*Geombinatorics*, 2001, Vol. X,
No. 3, pp. 96-102.

Each Space Shuttle mission produces more than 25,000 real-time measurements in NASA's mission control center. Within the mission control center, dozens of computer programs analyze these measurements and present results to mission control personnel. Because these programs support the practice of human-in-the-loop control, they serve primarily to present information to mission controllers. The controller's job is to interpret the displayed information to monitor spacecraft and astronaut performance, making decisions and taking control actions when necessary for mission success or crew safety.

A single mission controller clearly cannot monitor all 25,000 real-time measurements. The experience of human space flight has evolved into a practice of several mission control disciplines, each monitoring and controlling several thousand measurements. The controllers arrange the measurements by function onto dozens of windowed displays, each showing a few hundred measurements. Because of the limited screen size, only a few of the displays are visible at any moment. The problem is: how can we use the statistics gathered from previous Space Shuttle missions to automatically select from a list of candidates the most informative display?

In this paper, we first provide a geometric approach to solving this problem, and then show how this geometric approach can be generalized to take statistical information into consideration.

Technical Report UTEP-CS-99-42, October 1999

A Geometric Approach to Classification of Trash in Ginned Cotton

Murali Siddaiah, Michael A. Lieberman, Nadipuram R. Prasad, and Vladik Kreinovich

Published in *Geombinatorics*, 2000, Vol. 10, No. 2,
pp. 77-82.

This paper discusses the use of a geometric approach to classify different types of trash (non-lint, non-fiber material) in ginned cotton. Pieces of trash can have complicated shapes, so we would like to find a good approximating family of sets. Which approximating family is the best? We reduce the corresponding optimization problem to a geometric one: namely, we show that, under some reasonable conditions, an optimal family must be shift-, rotation- and scale-invariant. We then use this geometric reduction to conclude that the best approximating low-dimensional families consist of sets with linear or circular boundaries.

This result is in good agreement with the existing empirical classification of trash into bark1, bark2, leaf, and pepper trash.

Technical Report UTEP-CS-99-41, October 1999

Updated version UTEP-CS-99-41a, March 2000

Fundamental Properties of Pair-Wise Interactions Naturally Lead to Quarks and Quark Confinement: A Theorem Motivated by Neural Universal Approximation Results

Vladik Kreinovich

In traditional mechanics, most interactions are pair-wise; if we omit one of the particles from our description, then the original pair-wise interaction can sometimes only be represented as an interaction between triples, etc. It turns out that, vice versa, every possible interaction between N particles can be represented as a pair-wise interaction if we represent each of the original N particles as a triple of new ones (and two new ones are not enough for this representation). The resulting three "particles" actually represent a single directly observable particle and, in this sense, cannot be separated. So, this representation gives a fundamental reason for the three-quark model of basic baryons, and explains quark confinement. The representation is based on a deep mathematical result (namely, a minor modification of Kolmogorov's solution to Hilbert's 13th problem) which has already been used in the foundations of neural networks and in the foundations of physics - to explain why fundamental physical equations are of second order, and why all these fundamental equations naturally lead to non-smooth solutions like singularities.

Technical Report UTEP-CS-99-40, October 1999

Updated version UTEP-CS-99-40b, May 2000

A Comment on The Shape of The Solution Set for Systems of Interval Linear Equations with Dependent Coefficients

Goetz Alefeld, Vladik Kreinovich, Guenter Mayer, and Michael Huth

Published in *Reliable Computing*, 2001, Vol. 7, No. 3,
pp. 275-277.

Technical Report UTEP-CS-99-39, October 1999

Updated version UTEP-CS-99-39a, January 2000

Complex Fuzzy Sets: Towards New Foundations

Hung T. Nguyen, Abraham Kandel, and Vladik Kreinovich

Published in
*Proceedings of the 9th IEEE International Conference on Fuzzy
Systems (FUZZ-IEEE'2000)*, San Antonio, Texas, May 7-10, 2000,
Vol. 2, pp. 1045-1048.

Uncertainty of complex-valued physical quantities z=x+iy can be described by complex fuzzy sets. Such sets can be described by membership functions m(x,y) which map the universe of discourse (the complex plane) into the interval [0,1]. The problem with this description is that it is difficult to translate directly into words from natural language. To make this translation easier, several authors have proposed to use, instead of a single membership function describing the complex number, several membership functions which describe different real-valued characteristics of this number, such as its real part, its imaginary part, its absolute value, etc. The quality of this new description strongly depends on the choice of these real-valued functions, so it is important to choose them optimally. In this paper, we formulate the problem of the optimal choice of these functions and show that, for all reasonable optimality criteria, the level sets of the optimal functions are straight lines and circles. This theoretical result is in good accordance with our numerical experiments, according to which such functions indeed lead to a good description of complex fuzzy sets.
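
A minimal sketch of the several-characteristics idea, with made-up triangular membership functions for two real-valued characteristics (the real part and the absolute value) combined by min; the paper is about choosing such functions optimally, not about these particular shapes:

```python
def triangular(x, a, b, c):
    """Triangular membership function with support (a, c) and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def membership(z):
    """Degree to which complex z matches "Re(z) is about 0 and |z| is
    about 0.5"; the characteristics and shapes are illustrative only."""
    m_re = triangular(z.real, -1.0, 0.0, 1.0)   # real part near 0
    m_abs = triangular(abs(z), 0.0, 0.5, 1.0)   # absolute value near 0.5
    return min(m_re, m_abs)

print(membership(0.5j))     # prints 1.0: fully compatible
print(membership(2 + 0j))   # prints 0.0: real part far from 0
```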

Technical Report UTEP-CS-99-38, October 1999

Updated version UTEP-CS-99-38a, December 1999

Intervals (Pairs of Fuzzy Values), Triples, etc.: Can We Thus Get an Arbitrary Ordering?

Vladik Kreinovich and Masao Mukaidono

Published in
*Proceedings of the 9th IEEE International Conference on Fuzzy
Systems (FUZZ-IEEE'2000)*, San Antonio, Texas, May 7-10, 2000,
Vol. 1, pp. 234-238.

Traditional fuzzy logic uses real numbers as truth values. This description is not always adequate, so in interval-valued fuzzy logic, we use pairs (t-,t+) of real numbers, t-<=t+, to describe a truth value. To make this description even more adequate, instead of using real numbers to describe each value t- and t+, we can use intervals, and thus get fuzzy values which can be described by 4 real numbers each. We can iterate this procedure again and again. The question is: can we get an arbitrary partially ordered set in this manner? An arbitrary lattice? In this paper, we show that although we cannot thus generate arbitrary lattices, we can actually generate an arbitrary partially ordered set in this manner. In this sense, the "intervalization" operation is indeed universal.
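
The iterated construction can be sketched as a small recursive data type; the component-wise comparison below is one natural ordering on such nested values, not necessarily the exact one studied in the paper:

```python
class TruthValue:
    """A truth value is a pair (lo, hi) whose components are either
    numbers in [0, 1] or, recursively, TruthValues; iterating the pair
    construction yields the nested "intervalized" values in question."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __le__(self, other):
        # Component-wise comparison, recursing through nested pairs.
        return self.lo <= other.lo and self.hi <= other.hi

a = TruthValue(0.2, 0.6)   # an ordinary interval truth value
b = TruthValue(0.3, 0.8)
nested = TruthValue(TruthValue(0.1, 0.2), TruthValue(0.4, 0.5))  # 4 reals
print(a <= b, b <= a)  # prints True False
```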

Technical Report UTEP-CS-99-37, October 1999

Updated version UTEP-CS-99-37a, January 2000

Shadows Of Fuzzy Sets - a Natural Approach Towards Describing 2-D and Multi-D Fuzzy Uncertainty in Linguistic Terms

Hung T. Nguyen, Berlin Wu, and Vladik Kreinovich

Published in
*Proceedings of the 9th IEEE International Conference on Fuzzy
Systems (FUZZ-IEEE'2000)*, San Antonio, Texas, May 7-10, 2000,
Vol. 1, pp. 340-345.

Fuzzy information processing systems start with expert knowledge
which is usually formulated in terms of words from natural
language. This knowledge is then usually reformulated in
computer-friendly terms of membership functions, and the system
transforms these input membership functions into the membership
functions which describe the result of fuzzy data processing. It is
then desirable to translate this fuzzy information back from the
computer-friendly membership functions language to the human-friendly
natural language. In general, this is difficult even in
a 1-D case, when we are interested in a single
quantity y; however, the fuzzy research community has
accumulated some expertise in describing the resulting
1-D membership functions by words from natural language.
The problem becomes even more
complicated in 2-D and multi-D cases, when we are interested in
several quantities y1,...,ym, because there are fewer words
which describe the relation between several quantities than words
describing a single quantity.
To reduce this more complicated multi-D problem to a simpler (although
still difficult) 1-D case,
L. Zadeh proposed, in 1966, to use words to describe
fuzzy information about different *combinations*
y=f(y1,...,ym) of the desired variables. This idea is similar
to the use of marginal distributions in probability theory.
The corresponding
terms are called *shadows* of the original fuzzy set. The main
question is: do we lose any information in this translation? Zadeh has
shown that under certain conditions, the original fuzzy set can be
uniquely reconstructed from its shadows. In this paper, we prove that
for appropriately chosen shadows, the reconstruction is *always*
unique. Thus, if we manage to describe the original membership function
by linguistic terms which describe different
combinations y, this description is lossless.

Technical Report UTEP-CS-99-36a, October 2000

Updated version UTEP-CS-99-36b, March 2001

Interval Computations as a Particular Case of a General Scheme Involving Classes of Probability Distributions

Scott Ferson, Lev Ginzburg, Vladik Kreinovich, and Harry Schulte

Published in:
Juergen Wolff von Gudenberg and Walter Kraemer (eds.),
*Scientific Computing, Validated Numerics, Interval Methods,*
Kluwer, Dordrecht, 2001, pp. 355-366.

Technical Report UTEP-CS-99-36, October 1999

From Interval Methods of Representing Uncertainty to a General Description of Uncertainty

Vladik Kreinovich, Scott Ferson, Lev Ginzburg, Harry Schulte, Matthew R. Barry, and Hung T. Nguyen

In: Hrushikesha Mohanty and Chitta Baral (eds.),
*Trends in Information Technology,
Proceedings of the International Conference on
Information Technology ICIT'99, Bhubaneswar, India,
December 20-22, 1999,* Tata McGraw-Hill, New Delhi, 2000,
pp. 161-166.

Measurements do not result in an exact value of the measured quantity; even after the most accurate measurement, there is still some uncertainty about the actual value of the measured quantity. Traditionally, in science and engineering, this uncertainty is characterized by a probability distribution; however, often, we do not know this probability distribution exactly. So, to get a more adequate description of this uncertainty, we must consider classes of possible probability distributions. A natural question is: Are all possible classes needed for this description? In this paper, we show that even for simple situations, we indeed need arbitrary closed convex classes of probability distributions.
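
One concrete encoding of such a class of probability distributions is by pointwise lower and upper bounds on the cumulative distribution function (the "p-box" representation associated with Ferson's work); the sketch below shows how an ordinary interval fits this scheme, as the class of all distributions supported on [a, b]:

```python
def interval_pbox(a, b):
    """Return (lower, upper) CDF bounds encoding the class of all
    probability distributions whose support lies in [a, b]."""
    def upper(x):  # the CDF may jump to 1 as early as x = a
        return 0.0 if x < a else 1.0
    def lower(x):  # ...or as late as x = b
        return 0.0 if x < b else 1.0
    return lower, upper

lower, upper = interval_pbox(2.0, 5.0)
# At x = 3, any CDF value in [0, 1] is possible for some member
# of the class, so the bounds are maximally wide there.
print(lower(3.0), upper(3.0))  # prints 0.0 1.0
```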

Technical Report UTEP-CS-99-35, October 1999

On Combining Statistical and Fuzzy Techniques: Detection of Business Cycles from Uncertain Data

Hung T. Nguyen, Berlin Wu, and Vladik Kreinovich

In: Hrushikesha Mohanty and Chitta Baral (eds.),
*Trends in Information Technology,
Proceedings of the International Conference on
Information Technology ICIT'99, Bhubaneswar, India,
December 20-22, 1999,* Tata McGraw-Hill, New Delhi, 2000,
pp. 69-74.

Detecting the beginning and the end of the business cycle is an important and difficult economic problem. One of the reasons why this problem is difficult is that for each year, we have only expert estimates (subjective probabilities) indicating to what extent the economy was in growth or recession. In our previous papers, we used fuzzy techniques to process this uncertain information; namely, we used the operation min(a,b) to combine the subjective probabilities (expert estimates) of two events into a probability that both events happen. This function corresponds to the most optimistic estimate of the joint probability. In this paper, we use another operation which corresponds to the most cautious (pessimistic) estimate for joint probability. It turns out, unexpectedly, that as we get better extrapolations for subjective probabilities, the resulting change times become fuzzier and fuzzier until, for the best (least sensitive) extrapolation, we get the largest fuzziness. We explain this phenomenon by showing that in the presence of noise, an arbitrary continuous (e.g., fuzzy) system can be well described by its discrete analog, but as the description gets more accurate, the continuous description becomes necessary.
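
The optimistic and pessimistic combinations correspond to the two Frechet bounds: for events with P(A)=a and P(B)=b, the joint probability P(A and B) always lies between max(a+b-1, 0) and min(a, b). A one-line sketch of each:

```python
def joint_optimistic(a, b):
    """Most optimistic estimate of P(A and B): the upper Frechet bound."""
    return min(a, b)

def joint_pessimistic(a, b):
    """Most cautious estimate of P(A and B): the lower Frechet bound
    (the Lukasiewicz t-norm)."""
    return max(a + b - 1.0, 0.0)

# Subjective probabilities of "growth" in two consecutive years:
a, b = 0.7, 0.8
print(joint_pessimistic(a, b), joint_optimistic(a, b))
```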

Technical Report UTEP-CS-99-34, October 1999

Candidate Sets for Complex Interval Arithmetic

Juergen Wolff von Gudenberg and Vladik Kreinovich

In: Hrushikesha Mohanty and Chitta Baral (eds.),
*Trends in Information Technology,
Proceedings of the International Conference on
Information Technology ICIT'99, Bhubaneswar, India,
December 20-22, 1999,* Tata McGraw-Hill, New Delhi, 2000,
pp. 230-233.

Uncertainty of measuring complex-valued physical quantities can be described by complex sets. These sets can have complicated shapes, so we would like to find a good approximating family of sets. Which approximating family is the best? We reduce the corresponding optimization problem to a geometric one: namely, we prove that, under some reasonable conditions, an optimal family must be shift-, rotation- and scale-invariant. We then use this geometric reduction to conclude that the best approximating low-dimensional families consist of sets with linear or circular boundaries. This result is consistent with the fact that such sets have indeed been successful in computations, and it stimulates the study of further candidates.

Technical Report UTEP-CS-99-33, October 1999

From Fuzzy Values to Intuitionistic Fuzzy Values to Intuitionistic Fuzzy Intervals Etc.: Can We Get an Arbitrary Ordering?

Vladik Kreinovich, Masao Mukaidono, and Krassimir Atanassov

Published in *Notes on Intuitionistic Fuzzy Sets*, 1999, Vol. 5,
No. 3, pp. 11-18.

Traditional fuzzy logic uses real numbers as truth values. This description is not always adequate, so in intuitionistic fuzzy logic, we use pairs of real numbers to describe a truth value. Such pairs can be described either as pairs (t,f) for which t+f<=1, or, alternatively, as pairs (t,1-f) for which t<=1-f. To make this description even more adequate, instead of using real numbers to describe each value t and f, we can use intervals, and thus get interval-valued intuitionistic fuzzy values which can be described by 4 real numbers each. We can iterate this procedure again and again. The question is: can we get an arbitrary partially ordered set in this manner? An arbitrary lattice? In this paper, we show that although we cannot thus generate arbitrary lattices, we can actually generate an arbitrary partially ordered set in this manner. In this sense, the "intervalization" operation which underlies the notion of an intuitionistic fuzzy set is indeed universal.

Technical Report UTEP-CS-99-32, August 1999

Kolmogorov Complexity-Based Ideas for Locating Text in Web Images

Martin Schmidt, Vladik Kreinovich, and Luc Longpre

Published in: J. Ramirez-Angulo (ed.),
*Proceedings of the 1999 IEEE Midwest Symposium
on Circuits and Systems*, August 8-11, 1999, Las Cruces, New
Mexico, Vol. 1, pp. 543-546.

The growing popularity of the World Wide Web increases security risks. Search tools monitor plain text in web pages, but searching for text within graphical images is still difficult. For this search, we use the fact that compressed images containing text have a different size than compressed images without text.
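
A toy sketch of the idea, with zlib compression standing in for (uncomputable) Kolmogorov complexity and synthetic byte strings standing in for image regions; real use would compare compressed sizes of actual image blocks:

```python
import zlib

def compressed_size(data: bytes) -> int:
    """Length of the zlib-compressed data, a computable stand-in
    for Kolmogorov complexity."""
    return len(zlib.compress(data, 9))

# A flat background region compresses far better than a busy,
# text-bearing region (simulated here by a less regular byte pattern).
background = b"\x80" * 4096
busy = bytes((i * 37 + 11) % 251 for i in range(4096))
print(compressed_size(background) < compressed_size(busy))  # prints True
```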

Technical Report UTEP-CS-99-31, August 1999

Fuzzy Logic in Non-Destructive Testing of Aerospace Structures

Murali Krishna, Vladik Kreinovich, and Roberto Osegueda

Published in: J. Ramirez-Angulo (ed.),
*Proceedings of the 1999 IEEE Midwest Symposium
on Circuits and Systems*, August 8-11, 1999, Las Cruces, New
Mexico, Vol. 1, pp. 431-434.

In nondestructive testing, to locate the faults, we send an ultrasonic signal and measure the resulting vibration at different points. To describe and combine the uncertainty corresponding to different measurements and fuzzy estimates, we use fuzzy logic. As a result, we get reasonably simple computational models which lead to fault detection as good as that of the known, more complicated models.

Technical Report UTEP-CS-99-30, August 1999

Neural Network Approach to Speech Pathology

Antonio P. Salvatore, Nicole A. Thorne, and Charlotte M. Gross

Published in: J. Ramirez-Angulo (ed.),
*Proceedings of the 1999 IEEE Midwest Symposium
on Circuits and Systems*, August 8-11, 1999, Las Cruces, New
Mexico, Vol. 1, pp. 439-442.

A speech problem can have many different causes, from psychological to organic. The existing diagnosis of speech pathologies relies on skilled doctors who can often diagnose by simply listening to the patient. We show that neural networks can simulate this ability and thus provide an automated (preliminary) diagnosis.

Technical Report UTEP-CS-99-29, August 1999

Updated Version UTEP-CS-99-29b, May 2000

The Use of Fuzzy Measures in Pain Relief Control

Vladik Kreinovich and Nadipuram R. Prasad

Preliminary version
published in *Proceedings of the International Symposium on
Medical Informatics and Fuzzy Technology MIF'99*,
Hanoi, Vietnam, August 27-29, 1999, pp. 447-453;
full paper published in *Biomedical Soft Computing and Human
Sciences*, 2000, Vol. 5, No. 2, pp. 23-30.

Many people suffer from a continuous strong pain which is caused solely by the malfunction of the pain mechanism itself. One way to ease their pain is to electrically stimulate the spinal cord. Since the equations of pain are not known, we must use heuristic methods to find the optimal pain relief stimulation. In this paper, we show how fuzzy measures and similar nonlinear models can be used in pain relief control: they can be used to determine the parameters of the model which describes the dependence of the pain relief on the applied stimulation. Thus, fuzzy measures lead to the determination, for a given pain distribution, of the optimal pain relief stimulation.

Technical Report UTEP-CS-99-28, August 1999

Theoretical Explanation for the Empirical Probability Of Detection (POD) Curve: A Neural Network-Motivated Approach

Yohans Mendoza and Roberto Osegueda

Published in: J. Ramirez-Angulo (ed.),
*Proceedings of the 1999 IEEE Midwest Symposium
on Circuits and Systems*, August 8-11, 1999, Las Cruces, New
Mexico, Vol. 1, pp. 435-438.

For non-destructive testing of aerospace structures, it is extremely important to know how the probability of detecting a fault depends on its size. Recently, an empirical formula was found which describes this dependence. In this paper, we provide the theoretical justification for this formula by using methods motivated by the neural network approach.

Technical Report UTEP-CS-99-27, August 1999

Chu Spaces - a New Approach to Describing Uncertainty in Systems

Vladik Kreinovich, Guoqing Liu, and Hung T. Nguyen

Published in: J. Ramirez-Angulo (ed.),
*Proceedings of the 1999 IEEE Midwest Symposium
on Circuits and Systems*, August 8-11, 1999, Las Cruces, New
Mexico, Vol. 1, pp. 427-430.

This paper proposes the use of a specific type of categories for modeling and fusing information in complex systems in which uncertainty of various types needs to be taken into account.

Technical Report UTEP-CS-99-26, July 1999

Updated version UTEP-CS-99-26b, July 2001

Automatic Concurrency in SequenceL

Daniel E. Cooke and Vladik Kreinovich

Published in *Electronic Notes in Theoretical Computer Science*,
2000, Vol. 25; extended version published in
*Science of Computer Programming*, 2002, Vol. 42, No. 1,
pp. 115-128.

This paper presents a programming language which we believe to be most appropriate for the automation of parallel data processing, especially data processing of concern to the oil industry and to the U.S. Federal Agencies involved in the analysis of Satellite Telemetry Data. Focus is placed upon major language issues facing the development of the information power grid. The paper presents an example of the type of parallelism desired in the Grid. To implement this parallelism in a language such as Java, we need to specify the parallelism explicitly. We show that if we rewrite the same solution in the high-level language SequenceL, the parallelism becomes implicit. SequenceL therefore seems to be a good candidate for a Grid-Oriented Language, because its abstraction relieves the problem solver of much of the burden normally required in the development of parallel problem solutions.

Technical Report UTEP-CS-99-25, June 1999

Fuzzy/Probability ~ Fractal/Smooth

Hung T. Nguyen, Vladik Kreinovich, and Berlin Wu

Published in the *International Journal of Uncertainty, Fuzziness,
and Knowledge-Based Systems (IJUFKS)*, Vol. 7, No. 4,
pp. 363-370.

Many applications of probability theory are based on the
assumption that, as the number of cases increases, the
relative frequency of cases with a certain property
tends to a number - the *probability* that this property is true.
L. Zadeh has shown that in many real-life situations,
the frequency oscillates and does not converge at all.
It is very difficult to describe such situations
by using methods from
traditional probability theory. Fuzzy logic is not based on any
convergence assumptions and therefore, provides a natural
description of such situations. However,
a natural next question arises: how can we describe this oscillating
behavior? Since we cannot describe it by using a *single*
parameter (such as probability), we need to use a *multi-D* formalism.
In this paper, we describe an optimal formalism for describing such
oscillations, and show that it complements traditional probability
techniques in the same way as fractals complement smooth curves and
surfaces.
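
A small sketch of the kind of sequence Zadeh pointed to: with blocks of 1s and 0s whose lengths double, the relative frequency of 1s keeps swinging between roughly 1/3 and 2/3 and never converges:

```python
def oscillating_bits(n_blocks):
    """Bits in blocks of doubling length: 1, 00, 1111, 00000000, ..."""
    bits = []
    for k in range(n_blocks):
        bits.extend([1 - (k % 2)] * 2 ** k)
    return bits

bits = oscillating_bits(14)
# Relative frequency of 1s at the end of each block:
freqs = [sum(bits[:n]) / n for n in (2 ** m - 1 for m in range(2, 15))]
print(round(min(freqs), 3), round(max(freqs), 3))  # prints 0.333 0.714
```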

Technical Report UTEP-CS-99-24, June 1999

Updated version UTEP-CS-99-24a, May 2000

A New Graph Characteristic and its Application to Numerical Computability

Frank Harary, Vladik Kreinovich, and Luc Longpre

Published in *Information Processing Letters*, 2001,
Vol. 77, No. 5-6, pp. 277-282.

Many traditional numerical algorithms include a step on which we check
whether a given real number *a* is equal to 0. This checking is easy
for rational numbers, but for constructive *real* numbers,
whether a number is 0 or not is an algorithmically undecidable
problem. It is therefore desirable to re-formulate the existing
algorithms with as few such comparisons as possible. We
describe a new graph characteristic; this characteristic
describes how the number of comparisons in an algorithm
can be reduced.

Technical Report UTEP-CS-99-23, June 1999

Multi-Resolution Methods in Non-Destructive Testing of Aerospace Structures and in Medicine

Roberto Osegueda, Yohans Mendoza, Olga Kosheleva, and Vladik Kreinovich

Published in the Proceedings of the 14th IEEE International Symposium on Intelligent Control/Intelligent Systems and Semiotics ISIC/ISAS'99, September 15-17, 1999, Cambridge, Massachusetts, USA, pp. 207-212.

A fault in an aerospace structure can lead to catastrophic consequences; therefore, it is extremely important to test these structures regularly. Thorough testing of a huge aerospace structure results in a large amount of data, and processing this data takes a lot of time. To decrease the processing time, we use a "multi-resolution" technique, in which we first separate the data into data corresponding to different vibration modes, and then combine these data together. There are many possible ways to transform each mode's data into the probability of a fault, and many possible ways of combining these mode-based probabilities; different approaches lead to different results. In this paper, we show how a general methodology for choosing the optimal uncertainty representation can be used to find the optimal uncertainty representations for this particular problem. Namely, we show that the problem of finding the best approximation to the probability of detection (POD) curve (describing the dependence of the probability p(a) of detection on the size a of the fault) can be solved similarly to the problem of finding the best activation function in neural networks. A similar approach can be used in detecting faults in medical images (e.g., in mammography).
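
As an illustration of the POD-curve shape (not the specific formula derived in the paper), here is a logistic curve in the log of the fault size, the same sigmoid family as neural-network activation functions, with made-up parameters alpha and beta:

```python
import math

def pod(a, alpha=-2.0, beta=3.0):
    """Illustrative probability-of-detection curve: a sigmoid of the
    log fault size.  alpha and beta are made-up parameters; the paper
    justifies the empirical curve theoretically."""
    x = alpha + beta * math.log(a)
    return 1.0 / (1.0 + math.exp(-x))

for size in (0.5, 1.0, 2.0, 5.0):
    print(size, round(pod(size), 3))
```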


Technical Report UTEP-CS-99-22, June 1999

Multi-Resolution Techniques in the Rules-Based Intelligent Control Systems: A Universal Approximation Result

Yeung Yam, Hung T. Nguyen, and Vladik Kreinovich

Published in the Proceedings of the 14th IEEE International Symposium on Intelligent Control/Intelligent Systems and Semiotics ISIC/ISAS'99, September 15-17, 1999, Cambridge, Massachusetts, USA, pp. 213-218.

Intelligent control is a very successful method of transforming expert knowledge of control rules (formulated in terms of natural language, like "small") into a precise control strategy. It has led to many spectacular applications, ranging from appliances to automatic subway control to super-precise temperature control on a Space Shuttle mission.

It is known that fuzzy control is a *universal approximator*,
i.e., that it can approximate every possible control strategy
within an arbitrary accuracy.
One of the main problems of fuzzy control
is that the number of rules which are
necessary to represent a given control strategy with a given accuracy
grows exponentially with the increase in accuracy. As a result, for
reasonable accuracy and a reasonable number of input variables, we
sometimes need astronomically many rules.

In this paper, we start to solve this problem by pointing out that traditional one-step fuzzy rule bases, in which expert rules directly express control in terms of the input, are often a simplification of the actual multi-step expert reasoning. We show that a natural formalization of such expert reasoning leads to a universal approximation result in which the number of control rules does not increase with the increase in accuracy. Thus, this multi-resolution approach looks like a promising solution to the rule base explosion problem.

Technical Report UTEP-CS-99-21, May 1999

Chu Spaces - a New Approach to Diagnostic Information Fusion

Hung T. Nguyen, Vladik Kreinovich, and Berlin Wu

Published in the *Proceedings of the 2nd International Conference on
Information Fusion FUSION'99*, Sunnyvale, CA, July 6-8, 1999,
Vol. 1, pp. 323-330.

This paper is rather theoretical. Its aim is to describe a general algebraic framework, known as Chu spaces, in which different types of information can be transformed into the same form, so that fusion procedures can be investigated in a single general framework.

Technical Report UTEP-CS-99-20, May 1999

On Average Bit Complexity of Interval Arithmetic

Chadi Hamzo and Vladik Kreinovich

Published in:
*Bulletin of the European Association for Theoretical
Computer Science (EATCS)*, 1999, Vol. 68, pp. 153-156.

In many practical situations, we know only the intervals which contain
the actual (unknown) values of physical quantities. If we know the
intervals [x] for a quantity x and [y] for another quantity
y, then, for every arithmetic operation *,
the set of possible values of x*y also forms an interval; the
operations leading from [x] and [y] to this new interval are
called *interval arithmetic* operations. For addition and
subtraction, corresponding interval operations consist of two
corresponding operations with real numbers, so there is no hope of
making them faster.

The best known algorithm for interval multiplication consists of 3 real-number multiplications and several comparisons. We describe a new algorithm for which the average time is equivalent to using only 2 multiplications of real numbers.
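The classical sign-case analysis behind such comparison-based interval multiplication can be sketched as follows. This is the standard textbook case analysis (in which 8 of the 9 sign combinations need only 2 real multiplications), not the paper's own algorithm, which further reduces the average count:

```python
def interval_mul(x, y):
    """Multiply intervals x = [a, b] and y = [c, d] by sign-case analysis.
    In 8 of the 9 sign combinations, 2 real multiplications suffice;
    only when both intervals contain 0 are 4 products needed."""
    a, b = x
    c, d = y
    if a >= 0:                        # x is nonnegative
        if c >= 0:
            return (a * c, b * d)
        if d <= 0:
            return (b * c, a * d)
        return (b * c, b * d)         # y contains 0
    if b <= 0:                        # x is nonpositive
        if c >= 0:
            return (a * d, b * c)
        if d <= 0:
            return (b * d, a * c)
        return (a * d, a * c)         # y contains 0
    if c >= 0:                        # x contains 0, y nonnegative
        return (a * d, b * d)
    if d <= 0:                        # x contains 0, y nonpositive
        return (b * c, a * c)
    return (min(a * d, b * c), max(a * c, b * d))  # both contain 0
```

For example, interval_mul((-1, 2), (-3, 4)) returns (-6, 8), the exact range of products.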


Technical Report UTEP-CS-99-19, April 1999

We Live in the Best of Possible Worlds: A Proof

Liu Guoqing and Vladik Kreinovich

It is well known that equations of motion and equations which describe the dynamics of physical fields can be deduced from the condition that the action S (determined by the corresponding Lagrange function) is optimal. In other words, there is an optimality criterion on the set of all trajectories, and the actual trajectory is optimal with respect to this criterion.

The next reasonable question is: where does this optimality criterion on the set of all trajectories (i.e., the corresponding Lagrange function) come from? It is reasonable to assume that (similarly) on the set of all Lagrange functions, there is an optimality criterion, and the actual Lagrangian is optimal with respect to this criterion.

In this paper, we show that, under reasonable conditions on this optimality criterion, this approach leads to the standard Lagrange functions for General Relativity, Quantum Mechanics, Electrodynamics, etc. Thus, the Lagrange functions (and hence equations) of our world are indeed the best possible.

Technical Report UTEP-CS-99-18, March 1999

An Optimality Criterion for Arithmetic of Complex Sets

Vladik Kreinovich and Juergen Wolff von Gudenberg

Published in *Geombinatorics*,
2000, Vol. 10, No. 1, pp. 31-37.

Uncertainty of measuring complex-valued physical quantities can be described by complex sets. These sets can have complicated shapes, so we would like to find a good approximating family of sets. Which approximating family is the best? We reduce the corresponding optimization problem to a geometric one: namely, we prove that, under some reasonable conditions, an optimal family must be shift-, rotation- and scale-invariant. We then use this geometric reduction to conclude that the best approximating low-dimensional families consist of sets with linear or circular boundaries. This result is consistent with the fact that such sets have indeed been successful in computations.

Technical Report UTEP-CS-99-17a, November 2000

Chu Spaces: Towards New Foundations for Fuzzy Logic and Fuzzy Control, with Applications to Information Flow on the World Wide Web

Nhu Nguyen, Hung T. Nguyen, Berlin Wu, Vladik Kreinovich, and Guoqing Liu

Published in *Journal of Advanced Computational Intelligence
and Intelligent Informatics (JACIII)*, 2001, Vol. 5, No. 3, pp. 149-156.

We show that Chu spaces, a new formalism used to describe parallelism and information flow, provide uniform explanations for different choices of fuzzy methodology, such as choices of fuzzy logical operations, of membership functions, of defuzzification, etc.

Technical Report UTEP-CS-99-17, March 1999

A New Look at Fuzzy Theory via Chu Spaces

Hung T. Nguyen, Berlin Wu, and Vladik Kreinovich

Published in the *Proceedings of The Eighth International Fuzzy
Systems Association World Congress IFSA'99,* Taipei, Taiwan, August
17-20, 1999, pp. 237-240.

We propose to use Chu categories as a general framework for uncertainty analysis, with a special attention to fuzzy theory. We emphasize the fact that by viewing fuzzy concepts as Chu spaces, we can discover new aggregation operators, and model interactions and relationship between fuzzy data; these possibilities are due, in essence, to the category structure of Chu spaces, and especially to their morphisms. This paper is a tutorial introduction to the subject.

Technical Report UTEP-CS-99-16, March 1999

From Fuzzy Models to Fuzzy Control

Chitta Baral, Vladik Kreinovich, Hung T. Nguyen, and Yeung Yam

Published in the *Proceedings of The Eighth International Fuzzy
Systems Association World Congress IFSA'99,* Taipei, Taiwan, August
17-20, 1999, pp. 246-250.

Traditional (non-fuzzy) control methodology deals with situations when we know exactly how the system behaves and how it will react to different controls, and we want to choose an appropriate control strategy. This methodology enables us to transform the description of the plant's (system's) behavior into an appropriate control strategy.

In many practical situations, we do not have the exact knowledge of the system's behavior, but we have expert-supplied fuzzy rules which describe this behavior. In such situations, it is desirable to transform these description rules into rules describing control. There exist several reasonable heuristics for such transformation; however, the lack of formal justification restricts their applicability. In this paper, we provide a justification for the most natural of the known heuristics: whenever we have a description rule "if A(x) and B(u) then C(x')", and we want to achieve x'=d(x), add a control rule "if A(x) and C(d(x)), then B(u)".
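Under the purely illustrative assumption that rule components are stored as plain strings, the heuristic amounts to a mechanical rewrite; the names DescriptionRule and to_control_rule below are ours, not the paper's:

```python
from dataclasses import dataclass

@dataclass
class DescriptionRule:
    state_premise: str    # A(x): fuzzy condition on the current state
    control_premise: str  # B(u): fuzzy condition on the control
    conclusion: str       # C(x'): fuzzy condition on the next state

def to_control_rule(rule: DescriptionRule, goal: str = "d(x)") -> str:
    """From the description rule "if A(x) and B(u) then C(x')" and the
    desired behavior x' = d(x), build the control rule
    "if A(x) and C(d(x)) then B(u)"."""
    achieved = rule.conclusion.replace("x'", goal)
    return f"if {rule.state_premise} and {achieved} then {rule.control_premise}"
```

For instance, the description rule "if A(x) and B(u) then C(x')" is turned into the control rule "if A(x) and C(d(x)) then B(u)".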

Technical Report UTEP-CS-99-15, March 1999

Towards Intelligent Virtual Environment for Training Medical Doctors in Surgical Pain Relief

Richard Alo, Kenneth Alo, and Vladik Kreinovich

Published in the *Proceedings of The Eighth International Fuzzy
Systems Association World Congress IFSA'99,* Taipei, Taiwan, August
17-20, 1999, pp. 260-264.

Chronic pain is a serious health problem affecting millions of people
worldwide. Spinal cord stimulation is one of the most
effective methods of easing chronic pain. For most patients, a
careful selection of weak electric currents drastically
decreases the pain level.
Engineering progress leads to
more and more flexible devices that offer millions
of possible stimulation regimes. Since it is not
possible to test all of them on each patient, we need an
*intelligent*
method of choosing an appropriate stimulation regime.
In this paper, we describe the need for
an intelligent virtual environment for
training medical doctors in surgical pain relief; specifically, we
show that the design of such a system will drastically
speed up the doctors' training and enhance their skills.


Technical Report UTEP-CS-99-14, March 1999

System Reliability: A Case When Fuzzy Logic Enhances Probability Theory's Ability to Deal With Real-World Problems

Timothy J. Ross, Carlos Ferregut, Roberto Osegueda, and Vladik Kreinovich

Published in *Proceedings of
The 18th International Conference
of the
North American Fuzzy Information Society NAFIPS'99*,
New York City,
June 10-12, 1999, pp. 81-84.

In his recent paper "Probability theory needs an infusion of fuzzy logic to enhance its ability to deal with real-world problems", L. Zadeh argues exactly that. In this talk, we give an example of a real-world problem for which such an infusion is indeed successful: the problem of system reliability.

Technical Report UTEP-CS-99-13, March 1999

Towards Faster, Smoother, and More Compact Fuzzy Approximation, with an Application to Non-Destructive Evaluation of Space Shuttle's Structural Integrity

Yeung Yam, Roberto Osegueda and Vladik Kreinovich

Published in *Proceedings of
The 18th International Conference
of the
North American Fuzzy Information Society NAFIPS'99*,
New York City,
June 10-12, 1999, pp. 243-247.

It is known that fuzzy systems are universal approximators, i.e., any input-output system can be approximated, within any given accuracy, by a system described by fuzzy rules. Fuzzy rules work well in many practical applications. However, in some applications, the existing fuzzy rule approximation techniques are not sufficient:

First, in many practical problems (e.g., in many control
applications), *derivatives* of the approximated function are very
important, and so, we want not only the approximating function to be
close to the approximated one, but we also want their derivatives to be
close; however, standard fuzzy approximation techniques do not
guarantee the accuracy of approximating a derivative.

Second, to get the desired approximation accuracy, we sometimes need unrealistically many rules.

In this talk, we show how both problems can be solved.

Technical Report UTEP-CS-99-12, March 1999

Chu Spaces: Towards New Foundations for Fuzzy Logic and Fuzzy Control, with Applications to Information Flow on the World Wide Web

Hung T. Nguyen, Vladik Kreinovich, and Guoqing Liu

Published in *Proceedings of
The 18th International Conference
of the
North American Fuzzy Information Society NAFIPS'99*,
New York City,
June 10-12, 1999, pp. 766-780.

We show that Chu spaces, a new formalism used to describe parallelism and information flow, provide uniform explanations for different choices of fuzzy methodology, such as choices of fuzzy logical operations, of membership functions, of defuzzification, etc.

Technical Report UTEP-CS-99-11, March 1999

Arithmetic of Complex Sets: Nickel's Classical Paper Revisited from a Geometric Viewpoint

Vladik Kreinovich and Juergen Wolff von Gudenberg

Published in *Geombinatorics*, 1999, Vol. 9,
No. 1, pp. 21-26.

Due to measurement uncertainty, after
measuring a value of a physical quantity (or quantities), we do not
get its exact value, we only get a *set* of possible values of
this quantity (quantities). In case of 1-D quantities, we get an
*interval* of possible values. It is known that the family of all
real intervals is closed under point-wise arithmetic operations
(+,-,*) (i.e., this family forms an *arithmetic*).
This closure property is efficiently used to estimate the
set of possible values for y=f(x1,...,xn) from the known sets
of possible values for the xi.

In some practical problems, physical
quantities are complex-valued; it is therefore desirable to find a
similar closed family (arithmetic) of *complex* sets. We follow
K. Nickel's 1980 paper to show
that, in contrast to the 1-D interval case, there is no
finite-dimensional arithmetic.

We prove this result by reformulating it as a geometric problem of finding a finite-dimensional family of planar sets which is closed under Minkowski addition, rotation, and dilation.

Technical Report UTEP-CS-99-10, March 1999

Updated version UTEP-CS-99-10a, November 1999

For Interval Computations, if Absolute-Accuracy Optimization is NP-Hard, then so is Relative-Accuracy Optimization

Vladik Kreinovich

One of the basic problems of interval computations is to compute a
range of a given function f(x1,...,xn) over a given box
(i.e., to compute the maximum and the minimum of the
function on the box). For many classes of functions (e.g., for
quadratic functions) this problem is NP-hard; it is even NP-hard if
instead of computing the minimum and maximum *exactly*, we want to
compute them with a given (absolute) *accuracy*. In
practical situations, it is more realistic to ask for a *relative*
accuracy; are the corresponding problems still NP-hard?
We show that under some reasonable conditions, NP-hardness of
absolute-accuracy optimization implies that relative-accuracy
optimization is NP-hard as well.


Technical Report UTEP-CS-99-9, March 1999

Locating the Whole Pattern is Better than Locating its Pieces: A Geometric Explanation of an Empirical Phenomenon

Scott A. Starks and Vladik Kreinovich

Published in
*Geombinatorics*, 1999, Vol. 8, No. 4, pp. 116-121.

In many practical problems, we must find a pattern in an image. For situations in which the desired pattern consists of several simple components, the traditional approach is first to look for such components, and then to see whether the relative locations of these components are consistent with the pattern. Recent experiments have shown that a much more efficient pattern recognition can be achieved if we look for the whole pattern (without decomposing it first). In this paper, we give a simple geometric explanation of this empirical fact.

Technical Report UTEP-CS-99-8, January 1999

Why Fundamental Physical Equations are of Second Order?

Takeshi Yamakawa and Vladik Kreinovich

Published in *International Journal of Theoretical Physics*,
1999, Vol. 38, No. 6, pp. 1763-1770.

In this paper, we use a deep mathematical result (namely, a minor modification of Kolmogorov's solution to Hilbert's 13th problem) to explain why fundamental physical equations are of second order. The same result explains why all these fundamental equations naturally lead to non-smooth solutions such as singularities.


Technical Report UTEP-CS-99-7, January 1999

Extending t-Norms Beyond [0,1]: Relevant Results of Semigroup Theory

Vladik Kreinovich and Yeung Yam

Published in *BUlletin for Studies and Exchanges on
Fuzziness and its AppLications (BUSEFAL)*,
Vol. 78, Spring 1999, pp. 12-16.

Originally, fuzzy logic was proposed to describe human reasoning. Lately, it turned out that fuzzy logic is also a convenient approximation tool, and that moreover, sometimes a better approximation can be obtained if we use real values outside the interval [0,1]; it is therefore necessary to describe possible extensions of t-norms and t-conorms to such new values. It is reasonable to require that this extension be associative, i.e., that the set of truth values with the corresponding operation form a semigroup. Semigroups have been extensively studied in mathematics. In this short paper, we describe several results from semigroup theory which we believe to be relevant for the proposed extension of t-norms and t-conorms.

Technical Report UTEP-CS-99-6, January 1999

Time-Bounded Kolmogorov Complexity May Help in Search for Extra Terrestrial Intelligence (SETI)

Martin Schmidt

One of the main strategies in Search for Extra Terrestrial Intelligence (SETI) is trying to overhear communications between advanced civilizations. However, there is a (seeming) problem with this approach: advanced civilizations, most probably, save communication expenses by maximally compressing their messages, and the notion of a maximally compressed message is naturally formalized as a message x for which Kolmogorov complexity C(x) is close to its length l(x), i.e., as a "random" message. In other words, a maximally compressed message is indistinguishable from truly random noise, and thus, trying to detect such a message does not seem to be a working SETI strategy.

We show that this argument does not take into consideration the *time* necessary for compression and decompression of a message. If we take this time into consideration, and therefore consider time-bounded versions of Kolmogorov complexity, then the above "problem" disappears. We also show which version of time-bounded Kolmogorov complexity is most appropriate for formalizing SETI strategies.

Technical Report UTEP-CS-99-5, January 1999

Beyond [0,1] to Intervals and Further: Do We Need All New Fuzzy Values?

Yeung Yam, Masao Mukaidono, and Vladik Kreinovich

Published by
The Chinese University of Hong Kong,
Department of Mechanical and Automation Engineering, as
Technical Report CUHK-MAE-99-004, January 1999,
and in the *Proceedings of The Eighth International Fuzzy
Systems Association World Congress IFSA'99,* Taipei, Taiwan, August
17-20, 1999, pp. 143-146.

In many practical applications of fuzzy methodology, it is desirable to go beyond the interval [0,1] and to consider more general fuzzy values: e.g., intervals, or real numbers outside the interval [0,1]. When we increase the set of possible fuzzy values, we increase the number of bits necessary to store each degree, and therefore the computation time needed to process these degrees. Since in many applications it is crucial to get the result on time, it is desirable to make the smallest possible increase. In this paper, we describe such smallest possible increases.

Technical Report UTEP-CS-99-4, January 1999

Intervals is All We Need: An Argument

Masao Mukaidono, Yeung Yam, and Vladik Kreinovich

Published by
The Chinese University of Hong Kong,
Department of Mechanical and Automation Engineering, as
Technical Report CUHK-MAE-99-005, January 1999, and in
the *Proceedings of The Eighth International Fuzzy
Systems Association World Congress IFSA'99,* Taipei, Taiwan, August
17-20, 1999, pp. 147-150.

In many practical applications of fuzzy methodology, it is desirable to go beyond the interval [0,1] and to consider more general fuzzy values: e.g., intervals, or more general sets of values. In this paper, we show that under some reasonable assumptions, there is no need to go beyond intervals.

Technical Report UTEP-CS-99-3, January 1999

Why Clustering in Function Approximation? Theoretical Explanation

Vladik Kreinovich and Yeung Yam

Preliminary version published by
The Chinese University of Hong Kong,
Department of Mechanical and Automation Engineering, as
Technical Report CUHK-MAE-99-001, January 1999; full version
published in *International Journal of Intelligent Systems*,
2000, Vol. 15, No. 10, pp. 959-966.

Function approximation is a very important practical problem:
in many practical applications, we know the exact
form of the functional dependence y=f(x1,...,xn) between
physical quantities, but this exact dependence is complicated, so we
need a lot of computer space to store it, and a lot of time to process
it, i.e., to predict y from the given xi. It is therefore
necessary to find a simpler *approximate* expression
g(x1,...,xn) for this same
dependence. This problem has been analyzed in numerical mathematics
for several centuries, and it is, therefore, one of the most
thoroughly analyzed problems of applied mathematics. There are many
results related to approximation by polynomials, trigonometric
polynomials, splines of different types, etc. Since this problem has
been analyzed for so long, it is no wonder that for many reasonable
formulations of the optimality criteria, the corresponding problems of
finding the optimal approximations have already been solved.

Lately, however, new
*clustering-related* techniques have been applied to
solve this problem (by Yager, Filev, Chu, and others).
At first glance, since for most traditional optimality criteria, optimal
approximations are already known, the clustering approach can only lead to
non-optimal approximations, i.e., approximations of inferior
quality. We show, however, that there exist new reasonable criteria
with respect to which clustering-based function approximation is
indeed the optimal method of function approximation.


Technical Report UTEP-CS-99-2, January 1999

Fuzzy Systems are Universal Approximators for a Smooth Function and its Derivatives

Vladik Kreinovich, Hung T. Nguyen, and Yeung Yam

Preliminary version published by
The Chinese University of Hong Kong,
Department of Mechanical and Automation Engineering, as
Technical Report CUHK-MAE-99-002, January 1999; full version published
in *International Journal of Intelligent Systems*, 2000,
Vol. 15, No. 6, pp. 565-574.

One of the reasons why fuzzy methodology is successful is that fuzzy systems are universal approximators, i.e., that we can approximate an arbitrary continuous function within any given accuracy by a fuzzy system. In some practical applications (e.g., in control), it is desirable to approximate not only the original function, but also its derivatives (so that, e.g., a fuzzy control approximating a smooth control will also be smooth). In our paper, we show that for any given accuracy, we can approximate an arbitrary smooth function by a fuzzy system so that not only the function is approximated within this accuracy, but its derivatives are approximated as well. In other words, we prove that fuzzy systems are universal approximators for smooth functions and their derivatives.
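A minimal sketch of why such results are plausible: a zero-order Takagi-Sugeno fuzzy system with Gaussian membership functions is itself infinitely differentiable, so tuning it to a smooth target tends to match the derivative as well. This is an illustrative construction, not the paper's proof:

```python
import numpy as np

def fuzzy_approx(centers, heights, width):
    """Zero-order Takagi-Sugeno fuzzy system with Gaussian membership
    functions. Every membership function is infinitely differentiable,
    so the resulting input-output map is smooth."""
    def f(x):
        # firing strength of each rule "if x is near centers[i], output heights[i]"
        w = np.exp(-((x - centers) / width) ** 2)
        # weighted-average defuzzification
        return float(np.sum(w * heights) / np.sum(w))
    return f

# Approximate sin on [0, 2*pi] with 41 rules placed on a uniform grid.
centers = np.linspace(0.0, 2.0 * np.pi, 41)
f = fuzzy_approx(centers, np.sin(centers), centers[1] - centers[0])
```

Away from the endpoints, both f(x) and its (finite-difference) derivative stay close to sin(x) and cos(x).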


Technical Report UTEP-CS-99-1, January 1999

Updated version UTEP-CS-99-1b, May 1999

Final version UTEP-CS-99-1d, August 2000

Computational Complexity of Planning and Approximate Planning in Presence of Incompleteness

Chitta Baral, Vladik Kreinovich, and Raul Trejo

A short version published in
*Proceedings of the
Sixteenth International Joint Conference on Artificial Intelligence
IJCAI'99*, Stockholm, Sweden, July 31 - August 6, 1999, Vol. 2,
pp. 948-953; full paper published in *Artificial
Intelligence*, 2000, Vol. 112, pp. 241-267.

In the last several years, there have been several studies about
the computational complexity of classical planning assuming that
the planner has complete knowledge about the initial situation.
Recently, there have been proposals to use "sensing" actions to plan
in presence of incompleteness. In this paper we study the
complexity of planning in such cases. In our study we use the
action description language *A* proposed in 1991 by Gelfond
and Lifschitz, and its extensions.

It is known that if we consider only plans of feasible
(polynomial) length, planning - with complete information about
the initial situation - in *A* is NP-complete: even
checking whether a given objective is attainable from a given
initial state is NP-complete. In this paper, we show that
the planning problem in presence of incompleteness is indeed
harder: it belongs to the next level of the complexity hierarchy (in
precise terms, it is Sigma_2 P-complete). To overcome the
complexity of this problem, Baral and Son have proposed several
approximations. We show that under certain conditions, one of
these approximations - 0-approximation - makes the problem
NP-complete (thus indeed reducing its complexity).
