Computer Science Department

Abstracts of 1997 Reports

Technical Report UTEP-CS-97-38, December 1997

Why Kolmogorov Complexity in Physical Equations

Vladik Kreinovich and Luc Longpre

Published in
*International Journal of Theoretical Physics*, 1998, Vol. 37,
No. 11, pp. 2791-2801.

Several researchers, including M. Gell-Mann, argue that the notion of Kolmogorov complexity, developed in algorithmic information theory, is useful in physics (i.e., in the description of the physical world). Their arguments are rather convincing, but there seems to be a gap between traditional physical equations and Kolmogorov complexity: namely, it is not clear how the standard equations of physics can lead to the algorithmic notions underlying Kolmogorov complexity. In this paper, this "gap" is bridged: we explain how Kolmogorov complexity naturally appears in physical equations.
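As background for this bridging, the Kolmogorov complexity K(x) is the length of the shortest program that outputs x; it is uncomputable, but the length of a compressed description gives a computable upper bound. A minimal sketch (our illustration, not the paper's construction):

```python
import random
import zlib

def compression_complexity(s: bytes) -> int:
    """Computable proxy for Kolmogorov complexity: the length of a
    compressed description of s (an upper bound, up to a constant)."""
    return len(zlib.compress(s, 9))

# A highly regular string compresses far below its length...
regular = b"ab" * 5000
# ...while a "random-looking" string of the same length does not.
random.seed(0)
noisy = bytes(random.randrange(256) for _ in range(10000))

print(compression_complexity(regular))   # small
print(compression_complexity(noisy))     # close to the full 10000 bytes
```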

Technical Report UTEP-CS-97-37, December 1997

Updated version UTEP-CS-97-37a, July 1998

Kolmogorov Complexity Justifies Software Engineering Heuristics

Ann Q. Gates, Vladik Kreinovich, and Luc Longpre

Published in *Bulletin of the European
Association for Theoretical Computer Science (EATCS)*,
1998, Vol. 66, pp. 150-154.

A "clean bill of health" produced by a software testing technique does not guarantee that the program is actually correct. In this paper, we show that several heuristic techniques for software testing that have been developed in software engineering can be rigorously justified. In this justification, we use Kolmogorov complexity to formalize the terms "simple" and "random" that these techniques use. The successful formalization of simple heuristics is a good indication that Kolmogorov complexity may be useful in formalizing more complicated heuristics as well.

Technical Report UTEP-CS-97-36, December 1997

Coincidences are Not Accidental: A Theorem

Vladik Kreinovich

Published in *Cybernetics and Systems: An International Journal*,
1999, Vol. 30, No. 5, pp. 429-440.

In this paper, we formalize and prove the statement that coincidences cannot be accidental, a statement that underlies many useful heuristics in mathematics and physics.

Our proof uses a version of Kolmogorov complexity, a technique originally developed to describe randomness and "accidentalness".

Technical Report UTEP-CS-97-35, November 1997

From Expert Words Directly to Numerical Simulations: Group-Theoretic Approach to Computing with Words in Information/Intelligent Systems

Vladik Kreinovich, Brian Penn, and Scott Starks

Published in: Lotfi A. Zadeh and Janusz
Kacprzyk (eds.), *Computing with Words in
Information/Intelligent Systems*, Springer-Verlag, Berlin, 1999,
pp. 495-517.

In many real-life situations, e.g., when making an environmental decision, it is important to be able to predict long-term consequences of different decisions. Very often, these predictions must be made in a situation where the only available information consists of expert rules, which are formulated by words from natural language. One possible way to transform these expert words into numerical simulation (leading to prediction) is to use the fuzzy control methodology. However, there is a problem with using this methodology: it involves replacing each word by a membership function, and this replacement drastically increases the required computer space (and thus the computation time), i.e., it "de-granulates" the original compact description. It is, therefore, desirable to get from the original words directly to numerical simulations, thus avoiding this de-granulation.

In seeking this direct transformation, we will use the experience of modern physics, where symmetry groups are a tool that makes it possible to compress complicated differential equations into compact form. In our previous papers, we have shown that the symmetry group approach can be used to find optimal membership functions, optimal t-norms and t-conorms, and optimal defuzzification procedures. In this paper, we show that the same approach can also be used to combine these steps and produce an (optimal) direct transformation from words to numerical results.

Technical Report UTEP-CS-97-34, November 1997

Updated version UTEP-CS-97-34a, January 1998

Interval Approach to Non-Destructive Testing of Aerospace Structures and to Mammography

Keith Worden, Roberto Osegueda, Carlos Ferregut, Soheil Nazarian, Eulalio Rodriguez, Debra L. George, Mary J. George, Vladik Kreinovich, Olga Kosheleva, and Sergio Cabrera

In: Goetz Alefeld and
Raul A. Trejo (eds.), *Interval Computations and its Applications to
Reasoning Under Uncertainty, Knowledge Representation,
and Control Theory. Proceedings of MEXICON'98, Workshop on Interval
Computations, 4th World Congress
on Expert Systems*, Mexico City, Mexico, 1998.


Technical Report UTEP-CS-97-33, November 1997

Updated version UTEP-CS-97-33a, January 1998

Complexity of Collective Decision Making Explained by Neural Network Universal Approximation Theorem

Raul A. Trejo and Vladik Kreinovich

In: Goetz Alefeld and
Raul A. Trejo (eds.), *Interval Computations and its Applications to
Reasoning Under Uncertainty, Knowledge Representation,
and Control Theory. Proceedings of MEXICON'98, Workshop on Interval
Computations, 4th World Congress
on Expert Systems*, Mexico City, Mexico, 1998.


Technical Report UTEP-CS-97-32, November 1997

Updated version UTEP-CS-97-32a, January 1998

Strong Negation: Its Relation to Intervals and its Use in Expert Systems

Scott A. Starks, Vladik Kreinovich, Hung T. Nguyen, Hoang Phuong Nguyen, and Mirko Navara

In: Goetz Alefeld and
Raul A. Trejo (eds.), *Interval Computations and its Applications to
Reasoning Under Uncertainty, Knowledge Representation,
and Control Theory. Proceedings of MEXICON'98, Workshop on Interval
Computations, 4th World Congress
on Expert Systems*, Mexico City, Mexico, 1998.


Technical Report UTEP-CS-97-31, November 1997

From Fuzzification and Intervalization to Anglification: A New 5D Geometric Formalism for Physics and Data Processing

Scott A. Starks and Vladik Kreinovich

Technical Report UTEP-CS-97-30, November 1997

Updated version UTEP-CS-97-30a, January 1998

OO or Not OO: When Object-Oriented is Better. Qualitative Analysis and Application to Satellite Image Processing

Ann Gates, Vladik Kreinovich, Leticia Sifuentes, and Scott Starks

In: Goetz Alefeld and
Raul A. Trejo (eds.), *Interval Computations and its Applications to
Reasoning Under Uncertainty, Knowledge Representation,
and Control Theory. Proceedings of MEXICON'98, Workshop on Interval
Computations, 4th World Congress
on Expert Systems*, Mexico City, Mexico, 1998.


Technical Report UTEP-CS-97-29, November 1997

Updated version UTEP-CS-97-29a, January 1998

Where to Bisect a Box? A Theoretical Explanation of the Experimental Results

Vladik Kreinovich and R. Baker Kearfott


Technical Report UTEP-CS-97-28, November 1997

Updated version UTEP-CS-97-28a, March 1998

Computational Complexity and Feasibility of Fuzzy Data Processing: Why Fuzzy Numbers, Which Fuzzy Numbers, Which Operations with Fuzzy Numbers

Hung T. Nguyen, Misha Koshelev, Olga Kosheleva, Vladik Kreinovich, and Radko Mesiar

Published in the *Proceedings of the International Conference on
Information Processing and Management of Uncertainty in
Knowledge-Based Systems (IPMU'98)*, Paris, France,
July 6-10, 1998, pp. 273-280.

In many real-life situations, we cannot directly measure or estimate
the desired quantity *r*. In these situations, we measure or estimate
other quantities *r1,...,rn* related to *r*, and then reconstruct
*r* from the estimates for *ri*. This reconstruction is called
*data processing*.

Often, we only have fuzzy information about *ri*. In such cases, we
have *fuzzy data processing*. Fuzzy data means that instead of a
single number *ri*, we have several numbers that describe the fuzzy
knowledge about the corresponding quantity. Since we need to process
more numbers, the computation time for fuzzy data processing is often
much larger than for the usual non-fuzzy one. It is, therefore,
desirable to select representations and processing algorithms that
minimize this increase and thus make fuzzy data processing feasible.

In this paper, we show that the necessity to minimize computation time explains why we use fuzzy numbers, and determines which operations with them we should use.
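To illustrate the blow-up discussed above: one common representation (used here purely as an illustration; the paper analyzes which representations and operations are optimal) stores a fuzzy number as several alpha-cut intervals and processes them by interval arithmetic, so each crisp operation becomes one interval operation per level:

```python
# A fuzzy number represented by its alpha-cuts: for each level alpha,
# an interval (lo, hi) of values possible to degree >= alpha.

def add_fuzzy(a, b):
    """Add two fuzzy numbers cut-by-cut using interval arithmetic:
    [a1,a2] + [b1,b2] = [a1+b1, a2+b2] at each alpha level."""
    return {alpha: (a[alpha][0] + b[alpha][0], a[alpha][1] + b[alpha][1])
            for alpha in a}

# "about 3" and "about 5", each stored at three alpha levels --
# three times the data (and arithmetic) of a single crisp number.
about3 = {0.0: (1.0, 5.0), 0.5: (2.0, 4.0), 1.0: (3.0, 3.0)}
about5 = {0.0: (3.0, 7.0), 0.5: (4.0, 6.0), 1.0: (5.0, 5.0)}

print(add_fuzzy(about3, about5))
# {0.0: (4.0, 12.0), 0.5: (6.0, 10.0), 1.0: (8.0, 8.0)}
```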


Technical Report UTEP-CS-97-27, November 1997

Updated version UTEP-CS-97-27a, March 1998

Operations with Fuzzy Numbers Explain Heuristic Methods in Image Processing

Olga Kosheleva, Vladik Kreinovich, Bernadette Bouchon-Meunier, and Radko Mesiar

Published in the *Proceedings of the International Conference on
Information Processing and Management of Uncertainty in
Knowledge-Based Systems (IPMU'98)*, Paris, France,
July 6-10, 1998, pp. 265-272.

Maximum entropy method and its heuristic generalizations are very useful in image processing. In this paper, we show that the use of fuzzy numbers enables us to naturally explain these heuristic methods.


Technical Report UTEP-CS-97-26, November 1997

Updated version UTEP-CS-97-26a, March 1998

Adding Fuzzy Integral to Fuzzy Control

Hung T. Nguyen, Vladik Kreinovich, and Richard Alo

Published in the *Proceedings of the International Conference on
Information Processing and Management of Uncertainty in
Knowledge-Based Systems (IPMU'98)*, Paris, France,
July 6-10, 1998, pp. 657-664.

Sugeno integral was invented a few decades ago as a natural fuzzy-number
analogue of the classical integral. Sugeno integral has many
interesting applications. It is reasonable to expect
that it can be used in all
application areas where classical integrals are used, and in
many such areas it is indeed useful. Surprisingly, however, it has
never been used in *fuzzy control*, although in traditional control,
classical integral is one of the main tools.

In this paper, we show that the appropriately modified Sugeno integral is indeed useful for fuzzy control: namely, it provides numerical characterization of stability and smoothness of fuzzy control strategies.
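For reference, the classical discrete Sugeno integral that this modification builds on can be sketched as follows (the normalized-cardinality measure is our illustrative choice, not the paper's):

```python
def sugeno_integral(values, mu):
    """Discrete Sugeno integral of a [0,1]-valued function given by
    `values`, with respect to a fuzzy measure `mu` on index subsets:
    max over thresholds a of min(a, mu({i : values[i] >= a}))."""
    result = 0.0
    for a in values:
        level_set = frozenset(i for i, v in enumerate(values) if v >= a)
        result = max(result, min(a, mu(level_set)))
    return result

# Illustrative fuzzy measure (our choice): normalized cardinality.
n = 4
mu = lambda subset: len(subset) / n

print(sugeno_integral([0.2, 0.9, 0.5, 0.7], mu))   # 0.5
```

The supremum over all thresholds is attained at one of the function's values, which is why checking only `values` suffices in the discrete case.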


Technical Report UTEP-CS-97-25, November 1997

Updated version UTEP-CS-97-25a, March 1998

How to Divide a Territory? A New Simple Differential Formalism for Optimization of Set Functions

Hung T. Nguyen and Vladik Kreinovich

Published in *International Journal of Intelligent Systems*,
1999, Vol. 14, No. 3, pp. 223-251.

In many practical problems, we must optimize a *set function*,
i.e., find a set *A* for which *f(A)* is maximum,
where *f* is a function
defined on the class of sets. Such problems appear in design, in image
processing, in game theory, etc.

Most optimization problems can be solved (or at least simplified) by
using the fact that small deviations from an optimal solution can only
decrease the value of the objective function; as a result, some
derivative must be equal to 0. This approach has been successfully
used, e.g., for set functions in which the desired set *A* is a
*shape*, i.e., a smooth (or piece-wise smooth) surface. In some
real-life problems, in particular, in the territorial
division problem,
the existing methods are not directly applicable. For such
problems, we design a new simple
differential formalism for optimizing set functions.


Technical Report UTEP-CS-97-24, October 1997

Updated version UTEP-CS-97-24a, April 1998

A Variation on the Zero-One Law

Andreas Blass, Yuri Gurevich, Vladik Kreinovich, and Luc Longpre

Published in *Information Processing Letters*, 1998,
Vol. 67, pp. 29-30.

Given a decision problem *P* and a probability distribution over
binary strings, for each *n*, we independently draw an instance *xn* of
*P* of length *n*. What
is the probability that there is a polynomial-time algorithm that solves all
instances *xn* of *P*? The answer is: zero or one.


Technical Report UTEP-CS-97-23, September 1997

Maximum Entropy Approach to Optimal Sensor Placement for Aerospace Non-Destructive Testing

Roberto Osegueda, Carlos Ferregut, Mary J. George, Jose M. Gutierrez, and Vladik Kreinovich

In: Gary J. Erickson, Joshua T. Rychert, and C. Ray Smith (eds.),
*Maximum Entropy and Bayesian Methods*,
Kluwer, Dordrecht, 1998, pp. 277-289.

The ideal design of an airplane should include *built-in
sensors* that are pre-blended in the perfect aerodynamic shape.
Each built-in sensor is expensive to blend in and requires continuous
maintenance and data processing, so we would like to use *as few
sensors as possible*.
The *ideal* formulation of the corresponding optimization
problem is, e.g., to minimize the average detection error for fault
locations. However, there are *two obstacles* to this ideal
formulation:

- First, this ideal formulation requires that we *know the probabilities* of different fault locations and the probabilities of different aircraft exploitation regimes. In reality, especially for a new aircraft, *we do not have these statistics* (and for an aging aircraft, the statistics gathered from its earlier usage may not be applicable to its current state). Therefore, instead of a well-defined optimization problem, we face a not so well defined problem of optimization under uncertainty.
- Second, even if we know the probabilities, the corresponding optimization problem is very *computation-consuming* and difficult to solve.

To overcome the second obstacle, we use the *symmetry*
approach. Namely, the basic surface shapes are symmetric (with respect
to some geometric transformations such as rotations or shifts). The
MaxEnt approach results in distributions that are *invariant* with
respect to these symmetries, and therefore, the resulting *optimality
criterion* (be it the minimum of detection error, or the minimum of
fault location error, etc.) is also *invariant* with respect to these
same symmetries.
It turns out that for an arbitrary optimality criterion that satisfies the
natural symmetry conditions (crudely speaking, that the relative
quality of two sensor placements should not change if we simply shift or
rotate two placements),
the line formed by the optimally placed sensors (optimally
with respect to this criterion) can be described as an orbit of the
corresponding Lie transformation groups. As a result, *we describe
the optimal sensor placements.*

A similar problem of optimal sensor placement
is also discussed for *space structures.*


Technical Report UTEP-CS-97-22, September 1997

Environmentally-Oriented Processing of Multi-Spectral Satellite Images: New Challenges for Bayesian Methods

Scott A. Starks and Vladik Kreinovich

In: Gary J. Erickson, Joshua T. Rychert, and C. Ray Smith (eds.),
*Maximum Entropy and Bayesian Methods*,
Kluwer, Dordrecht, 1998, pp. 271-276.

Remotely sensed images from new-generation satellites present an opportunity for scientists to investigate problems in environmental and earth science which have previously been intractable. The magnitude of data that will arise from these hyperspectral instruments creates the need for innovative techniques to accomplish data reduction. This paper presents an algorithm which shows promise as a tool for reducing the dimensionality of data resulting from remote sensing. The optimality criterion for the algorithm is the Bayes risk in the reduced-dimension space.

Technical Report UTEP-CS-97-21, September 1997

We Must Choose the Simplest Physical Theory: Levin-Li-Vitanyi Theorem and its Potential Physical Applications

Dirk Fox, Martin Schmidt, Misha Koshelev, Vladik Kreinovich, Luc Longpre, and Jeff Kuhn

In: Gary J. Erickson, Joshua T. Rychert, and C. Ray Smith (eds.),
*Maximum Entropy and Bayesian Methods*,
Kluwer, Dordrecht, 1998, pp. 238-251.

If several physical theories are consistent with the same experimental
data, which theory should we choose? Physicists often choose the
*simplest* theory; this principle (explicitly formulated by
Occam) is one of the basic principles of physical
reasoning. However, until recently, this principle was mainly a
*heuristic* because it uses the *informal* notion of
simplicity.

With the
explicit notion of simplicity coming from the Algorithmic Information
theory, it is possible
not only to *formalize* this principle in a way that is
consistent with its traditional usage in physics, but also to
*prove* this principle, or, to be more precise, *deduce* it from
the fundamentals of mathematical statistics as the choice
corresponding to the least informative prior measure.
Potential physical applications of this formalization (due to Li and
Vitanyi) are presented.

In particular, we show that, on the *qualitative* level,
most fundamental ideas of physics can be re-formulated as
natural steps towards choosing a theory that is the simplest
in the above *precise* sense (although on the intuitive level,
it may seem that, e.g., *classical* physics is easier than quantum
physics): in particular, we show that such ideas as
Big Bang cosmology, atomism,
uncertainty principle,
Special Relativity, quark
confinement, quantization,
symmetry, supersymmetry, etc., can all be
justified by this (Bayesian-justified) preference for formalized
simplicity.

Technical Report UTEP-CS-97-20, September 1997

Which Algorithms are Feasible? MaxEnt Approach

Daniel E. Cooke, Vladik Kreinovich, and Luc Longpre

In: Gary J. Erickson, Joshua T. Rychert, and C. Ray Smith (eds.),
*Maximum Entropy and Bayesian Methods*,
Kluwer, Dordrecht, 1998, pp. 24-32.

It is well known that not all algorithms are feasible;
whether an algorithm is
feasible or not depends on how many
computational steps this algorithm requires.
The problem with the existing definitions of feasibility is that they
are rather *ad hoc*. Our goal is to use the
maximum entropy (MaxEnt) approach and get more motivated definitions.

If an algorithm is feasible, then, intuitively, we would expect its average running time *T* (over the given probability distribution on inputs) to be finite whenever the average input length *L* is finite.

Thus, we can say that an algorithm is
*necessarily feasible* if *T* is finite for *every* probability
distribution for which *L* is finite, and
*possibly feasible* if *T* is finite for *some* probability
distribution for which *L* is finite.

If we consider *all* possible probability distributions,
then these definitions trivialize: every algorithm is possibly
feasible, and only linear-time algorithms are necessarily feasible.

To make the definitions less trivial, we will use the main idea of MaxEnt and consider only distributions for which the entropy is the largest possible. Since we are interested in the distributions for which the average length is finite, it is reasonable to define MaxEnt distributions as follows: we fix a number L0 and consider distributions for which the entropy is the largest among all distributions with the average length L=L0.

If, in the above definitions, we only allow such
"MaxEnt" distributions, then the above feasibility notions become
non-trivial: an algorithm is possibly feasible if and only if it takes
at most exponential time (to be more precise, if and only if its
average running time t(n) over all inputs of length n grows slower
than some exponential function C^n), and necessarily feasible if it is
*sub-exponential* (i.e., if t(n) grows slower than *any*
exponential function).
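The exponential threshold can be checked numerically: the maximum-entropy distribution over input lengths with a fixed average is geometric, p(n) = (1-q)q^n, and under it the expected running time converges for sub-exponential t(n) but diverges once t(n) grows like C^n with C > 1/q. A small sketch (q = 0.5 is an arbitrary illustrative choice):

```python
def expected_time(t, q=0.5, n_max=200):
    """Partial sum of E[t] = sum_n (1-q) q^n t(n) under the geometric
    (maximum-entropy, fixed-mean) distribution over input lengths."""
    return sum((1 - q) * q**n * t(n) for n in range(n_max + 1))

# Polynomial (sub-exponential) running time: the series converges.
poly = expected_time(lambda n: n**2)
# Exponential running time with C = 3 > 1/q = 2: partial sums blow up.
expo = expected_time(lambda n: 3.0**n)

print(poly)   # finite (about 3.0 for q = 0.5)
print(expo)   # astronomically large, and still growing with n_max
```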

Technical Report UTEP-CS-97-19, September 1997

Sensor Placement for Aerospace Non-Destructive Evaluation (NDE): Optimization under Fuzzy Uncertainty

Roberto Osegueda, Carlos Ferregut, Mary J. George, Jose M. Gutierrez, and Vladik Kreinovich

Technical Report UTEP-CS-97-18, September 1997

The Challenge of Hyper-Spectral Satellite Imaging and Integer-Valued Fuzzy Sets

Maria Beltran, Vladik Kreinovich, and Scott A. Starks

Satellite images already produce huge amounts of data, which makes their processing a serious computational challenge. This problem will become even more complicated with the launch of multi-spectral Earth-imaging satellites that will increase the amount of information by at least two orders of magnitude. With such a huge amount of information, it is necessary to come up with data processing methods that are as fast as possible. In particular, we show that for fuzzy processing techniques, this leads to the necessity to use integer-valued fuzzy sets.
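One way to see the intended speed-up (a sketch under our own assumptions; the 0..255 scale below is illustrative, not necessarily the paper's): quantize membership degrees to a small integer range, so each degree fits in one byte and fuzzy set operations reduce to fast, exact integer min/max:

```python
# Membership degrees quantized to the integer scale 0..255 (one byte
# per pixel) instead of 64-bit floats -- an 8-fold saving in space --
# and min/max (fuzzy intersection/union) stay exact on integers.

SCALE = 255  # illustrative choice of granularity

def quantize(mu: float) -> int:
    """Map a [0,1] membership degree to the integer scale."""
    return int(mu * SCALE + 0.5)

def intersect(a: list, b: list) -> list:
    """Pointwise min: integer-valued fuzzy intersection."""
    return [min(x, y) for x, y in zip(a, b)]

# Hypothetical per-pixel degrees for two land-cover classes.
vegetation = [quantize(m) for m in (0.1, 0.8, 1.0, 0.4)]
wetland    = [quantize(m) for m in (0.9, 0.6, 0.2, 0.4)]
print(intersect(vegetation, wetland))   # [26, 153, 51, 102]
```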

Technical Report UTEP-CS-97-17, September 1997

Updated version UTEP-CS-97-17a, January 1998

ALPS: A Logic for Program Synthesis (Motivated by Fuzzy Logic)

Daniel E. Cooke, Vladik Kreinovich, and Scott Starks

Published in *Proceedings of FUZZ-IEEE'98*, Anchorage, Alaska,
May 1998, Vol. 1, pp. 779-784.

One of the typical problems in engineering and scientific applications is as follows: we know the values x1,...,xn of some quantities, we are interested in the values of some other quantities y1,...,ym, and we know the relationships between xi, yj, and, maybe, some auxiliary physical quantities z1,...,zk. For example, we may know an algorithm to compute y2 from x1, x3, and y1; we may also know an equation F(x1,x2,y1)=0 that relates these values, etc. The question is: can we compute the values of yj, and, if we can, how?

At first glance, this is a problem of numerical mathematics, and, indeed, there exist numerical methods to solve it. These methods use the equations from the very beginning; these equations can be very complicated, and therefore, processing them can be very time-consuming. E. Tyugu proposed a two-stage ("lazy computations") approach to solving these problems, and implemented it in the PRIZ system. In the first stage, we consider only which quantities are computable from which. Based on this information alone (and not using the specific algorithms or relationships of the type F(x1,x2,y1)=0), we find out whether it is possible to compute yj, and, if it is possible, what steps we should follow. For example, in the above example, we first compute y1 from x1 and x2 by solving the equation F(x1,x2,y1)=0, and then use the resulting value y1 and the known value x3 to compute y2. In the second stage, we actually perform these computations.

G. Mints showed that the problems of PRIZ's first stage can be reformulated in logical terms. He also showed that using the related logical algorithms instead of PRIZ's original ones can improve the system. However, there are cases when this approach does not work: for example, when we have two equations with two unknowns. No known logic is applicable to this case.

We propose a new logic (motivated by fuzzy logic) that is specially tailored for these problems, show its decidability, analyze its properties, and propose a special resolution rule for it. This new logic allows us to use a two-stage ("lazy computations") approach in the most general case.
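The first (planning) stage can be sketched as forward chaining over "computable-from" rules; the rule set below mirrors the example above, and all names are hypothetical:

```python
def plan(known, rules):
    """First stage of the two-stage approach: from the set of known
    quantities, repeatedly fire any rule whose inputs are all known,
    recording the order in which quantities become computable."""
    known = set(known)
    steps = []
    changed = True
    while changed:
        changed = False
        for inputs, output in rules:
            if output not in known and set(inputs) <= known:
                known.add(output)
                steps.append((inputs, output))
                changed = True
    return steps

# The example above: solving F(x1, x2, y1) = 0 yields y1 from x1 and
# x2; a known algorithm computes y2 from x1, x3, and y1.
rules = [(("x1", "x2"), "y1"), (("x1", "x3", "y1"), "y2")]
print(plan({"x1", "x2", "x3"}, rules))
# [(('x1', 'x2'), 'y1'), (('x1', 'x3', 'y1'), 'y2')]
```

(The case that defeats this simple closure, and motivates the new logic, is two equations with two unknowns: neither rule can fire alone even though the system is solvable.)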

The usefulness of a non-classical (fuzzy-type) logic in program design
shows that its non-classical features are extremely important not only
in the *soft* area of representing human reasoning, but also in
the *hard* parts of computer science.


Technical Report UTEP-CS-97-16, September 1997

Updated version UTEP-CS-97-16a, January 1998

A Modification of Sugeno Integral Describes Stability and Smoothness of Fuzzy Control

Hung T. Nguyen and Vladik Kreinovich

Published in *Proceedings of FUZZ-IEEE'98*, Anchorage, Alaska,
May 1998, Vol. 1, pp. 360-365.

Sugeno integral was invented a few decades ago as a natural fuzzy analogue of the classical integral. Sugeno integral has many interesting applications. It is reasonable to expect that it can be used in all application areas where classical integrals are used, and in many such areas it is indeed useful. Surprisingly, however, it has never been used in fuzzy control, although in traditional control, classical integral is one of the main tools.

In this paper, we show that the appropriately modified Sugeno integral is indeed useful for fuzzy control: namely, it provides numerical characterization of stability and smoothness of fuzzy control strategies.


Technical Report UTEP-CS-97-15, September 1997

Identification and Classification of Inconsistency in Relationship to Software Maintenance

Daniel Cooke, Luqi, and Vladik Kreinovich

This paper provides an overview of the relationship between recent work in logic programming and recent developments in software engineering. The relationship to software engineering is more specifically concerned with how formal specifications can be used to explain and represent the basis of software maintenance and evolution. Some of the results reviewed here have appeared in our previous papers. These previous results are summarized, extended, and made more general in this paper.

Technical Report UTEP-CS-97-14, September 1997

Computational Geometry and Artificial Neural Networks: A Hybrid Approach to Optimal Sensor Placement for Aerospace NDE

Roberto Osegueda, Carlos Ferregut, Mary J. George, Jose M. Gutierrez, and Vladik Kreinovich

Published in: Carlos Ferregut, Roberto Osegueda, and Alina Nunez (eds.),
*Proceedings of the International
Workshop on Intelligent NDE Sciences for Aging and Futuristic
Aircraft*, El Paso, TX,
September 30-October 2, 1997, pp. 59-71.

The ideal design of an airplane should include built-in sensors that are pre-blended in the perfect aerodynamic shape. Each built-in sensor is expensive to blend in and requires continuous maintenance and data processing, so we would like to use as few sensors as possible. The ideal formulation of the corresponding optimization problem is, e.g., to minimize the average detection error for fault locations. However, there are two obstacles to this ideal formulation:

- First, this ideal formulation requires that we know the probabilities of different fault locations etc., and there are usually not enough statistics to determine these probabilities.
- Second, even for a known distribution, finding the best locations is a very difficult computational problem.

Technical Report UTEP-CS-97-13, August 1997

Multi-Resolution Data Processing: It is Necessary, it is Possible, it is Fundamental

Scott A. Starks, Vladik Kreinovich, and Alex Meystel

Proceedings of the International Conference on
Intelligent Systems and Semiotics (ISAS'97), National Institute of
Standards and Technology Publ., Gaithersburg, MD, 1997, pp. 145-150;
also in
*Heuristics: The Journal of Intelligent Technologies*, 1997/98,
Vol. 10, No. 2, pp. 51-58.

Experience shows that
many data processing problems are difficult to solve, and some of
these problems have even been *proven* to be
computationally intractable. Human experts successfully solve many
such problems by using a hierarchical, *multi-resolution*
approach. These multi-resolution methods are, in several cases,
provably optimal. However, due to the computational intractability of
the problem itself, the multi-resolution
approach can only work if the systems that we
are analyzing are themselves hierarchical. We show that, first, due to
(inevitable) measurement inaccuracies, arbitrary input data are
*consistent* with the hierarchical model, and second, that in many
cases, the actual physical world is indeed fundamentally hierarchical.

Since traditional statistical methods have been designed primarily for
non-hierarchical models, their direct application to multi-resolution
data processing can lead to biased estimates. On a simple example, we
show how these methods can be corrected to avoid this
bias. Surprisingly, the analysis of this problem leads to new
unexpected *symmetries*.

Technical Report UTEP-CS-97-12, August 1997

Towards Computers of Generation Omega - Non-Equilibrium Thermodynamics, Granularity, and Acausal Processes: A Brief Survey

Misha Koshelev and Vladik Kreinovich

Proceedings of the International Conference on Intelligent Systems and Semiotics (ISAS'97), National Institute of Standards and Technology Publ., Gaithersburg, MD, 1997, pp. 383-388.

Nowadays, we mainly use computers of the fourth generation, and we are designing fifth-generation computers. It is reasonable to ask: what comes next? What will the computers of generation omega look like?

- As the speed of data processing increases, we face a natural limitation of *causality*, according to which the speed of all processes is limited by the speed of light.
- Lately, a new area of *acausal* (causality-violating) processes has entered mainstream physics.

In this brief survey, we describe:

- how non-equilibrium thermodynamics makes these processes consistent,
- how these processes can be used in computations, and
- how the very possibility of these processes leads to the *granularity* of the physical world.

Technical Report UTEP-CS-97-11, August 1997

Non-Equilibrium Thermodynamics Explains Semiotic Shapes: Applications to Astronomy and to Non-Destructive Testing of Aerospace Systems

Roberto Osegueda, Carlos Ferregut, Mary J. George, Jose M. Gutierrez, and Vladik Kreinovich

Proceedings of the International Conference on Intelligent Systems and Semiotics (ISAS'97), National Institute of Standards and Technology Publ., Gaithersburg, MD, 1997, pp. 378-382.

Celestial bodies such as galaxies, stellar clusters, planetary systems, etc., have different geometric shapes (e.g., galaxies can be spiral or circular, etc.). Usually, complicated physical theories are used to explain these shapes; for example, several dozen different theories explain why many galaxies are of spiral shape. Some rare shapes are still difficult to explain.

It turns out that
to explain these "astroshapes", we do
not need to know the details of *physical* equations:
practically all the shapes of
celestial bodies can be explained by simple *geometric*
invariance properties. This fact explains, e.g., why so many different
physical theories lead to the same spiral galaxy shapes.

This same physical idea is used to solve a different problem: the optimal sensor placement for non-destructive testing of aerospace systems.

Technical Report UTEP-CS-97-10, August 1997

How to Make World Wide Web Sites Faster and Easier to Use

Misha Koshelev

Working Notes of the AAAI Symposium on Frontiers in Soft Computing and Decision Systems, Boston, MA, November 8-10, 1997.

We propose a new idea of organizing Web sites so that the Web will be easier and faster to use.


Technical Report UTEP-CS-97-9, August 1997

Updated version UTEP-97-9a, November 1997

Spinal Cord Stimulation for Chronic Pain Management: Towards an Expert System

Kenneth M. Alo, Richard Alo, Andre de Korvin, and Vladik Kreinovich

Published in the *Proceedings of the 4th World Congress on Expert
Systems*, March 16-20, 1998, Mexico City, Vol. 1, pp. 156-164.

Chronic pain is a serious health problem affecting millions of people worldwide. Currently, spinal cord stimulation is one of the most effective methods of easing chronic pain. For most patients, a careful selection of weak electric currents makes it possible to drastically decrease the pain level. The first devices offered only a few possible regimes, and it was possible to choose an appropriate regime simply by exhaustive search. Continuous engineering progress has led to more and more flexible devices that offer a wide variety of millions of possible stimulation regimes. With this variety, it is no longer possible to test all of them on each patient; we need an intelligent method of choosing an appropriate stimulation regime.

In this paper, we describe the design of an expert system for choosing appropriate stimulation regimes. We hope that our paper will be understandable both to computer science readers interested in medical applications and to medical researchers interested in using computers for pain relief.


Technical Report UTEP-CS-97-8, June 1997

Optimal Approximation of Quadratic Interval Functions

Misha Koshelev, Luc Longpré, and Patrick Taillibert

Published in *Reliable Computing,* 1998, Vol. 4, No. 4,
pp. 351-360.

In this paper, we analyze the problem of the optimal (narrowest) approximation of a quadratic interval function y(x1,...,xn) (i.e., an interval function for which both endpoints are quadratic) by a linear interval function. We show that in general, this problem is computationally intractable (NP-hard). For a practically important 1D case (n=1), we present an efficient algorithm.
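The tractable 1D case can be illustrated by a classical construction (an illustration only, not necessarily the paper's algorithm): for a convex quadratic on an interval, the narrowest pair of *parallel* linear bounds is the secant through the endpoints (upper bound) together with the parallel tangent, which touches the quadratic at the midpoint (lower bound):

```python
def linear_enclosure(a, b, c, lo, hi):
    """Narrowest parallel linear bounds for the convex quadratic
    q(x) = a*x^2 + b*x + c (with a > 0) on the interval [lo, hi].
    Upper bound: the secant through the endpoints.
    Lower bound: the tangent with the same slope, touching at the midpoint.
    Returns (slope, lower_intercept, upper_intercept)."""
    q = lambda x: a * x * x + b * x + c
    slope = a * (lo + hi) + b              # (q(hi) - q(lo)) / (hi - lo), simplified
    upper_intercept = q(lo) - slope * lo   # secant: slope*x + upper_intercept
    mid = (lo + hi) / 2.0
    lower_intercept = q(mid) - slope * mid # parallel tangent at the midpoint
    return slope, lower_intercept, upper_intercept

# q(x) = x^2 on [0, 2]: secant 2x (upper), tangent 2x - 1 (lower)
s, li, ui = linear_enclosure(1.0, 0.0, 0.0, 0.0, 2.0)
```

The resulting enclosure has constant width a(hi-lo)^2/4, which is the best possible for parallel bounds; here it equals 1.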

File in LaTeX.

Technical Report UTEP-CS-97-7, May 1997

Why Intervals? Because If We Allow Other Sets, Tractable Problems Become Intractable

Monica Nogueira and Amarendra Nandigam

We show that if we add even one non-interval set to the family of all intervals, then some reasonable interval computation problems that were previously computationally feasible become intractable. This result provides one more motivation for using the interval family: if we enlarge this family, we lose tractability.

File in LaTeX.

Technical Report UTEP-CS-97-6, May 1997

Methodology of Fuzzy Control: An Introduction

Hung T. Nguyen and Vladik Kreinovich

Published in: H. T. Nguyen and M. Sugeno (eds.),
*Fuzzy Systems: Fuzzy Modeling and
Control*, Kluwer Academic, 1998, pp. 19-62.

Fuzzy control is a methodology that translates the experience of human operators, experience that is usually described by using words of natural language like "small", "a little bit", etc., into a precise control strategy. This chapter gives an introduction to this methodology.
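To make the methodology concrete, here is a toy one-input fuzzy controller (an illustrative sketch, not taken from the chapter): triangular membership functions encode the words "negative", "zero", and "positive", and rule outputs are combined by a weighted average:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peak value 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def control(error):
    """Toy fuzzy controller with three rules:
    'if error is negative, push up (+1)',
    'if error is zero, do nothing (0)',
    'if error is positive, push down (-1)',
    defuzzified by the weighted average of rule outputs."""
    rules = [
        (tri(error, -2.0, -1.0, 0.0), +1.0),  # error negative
        (tri(error, -1.0,  0.0, 1.0),  0.0),  # error zero
        (tri(error,  0.0,  1.0, 2.0), -1.0),  # error positive
    ]
    total = sum(w for w, _ in rules)
    return sum(w * out for w, out in rules) / total if total else 0.0
```

Between the peaks, the controller interpolates smoothly: e.g., control(0.5) blends the "zero" and "positive" rules half-and-half, giving -0.5.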

File in LaTeX, uses crckapb.sty.

Technical Report UTEP-CS-97-5, April 1997

Technical Report UTEP-CS-97-5a, August 1997

Soft Computing: Frontiers?

A Case Study of Hyper-Spectral Satellite Imaging

Scott Starks and Vladik Kreinovich

Preliminary version appeared in
*Working Notes of the AAAI Symposium on Frontiers in Soft Computing and
Decision Systems*, Boston, MA, November 8-10, 1997, pp. 66-71;

final version is in Larry Medsker (ed.), *Frontiers in Soft Computing
and Decision Systems*, AAAI Press (Publication No. FS-97-04), 1997,
pp. 52-57.

Soft computing methods such as fuzzy control, neural networks, etc., often require lots of computations even for small amounts of data. It is, therefore, sometimes believed that for larger amounts of data, the required amount of computations will be so large that we will reach the frontiers of soft computing.

In this paper, we show, using the example of hyper-spectral satellite imaging, that this belief is often too pessimistic. We should not be afraid to use (or at least to try to use) soft computing methods even for large amounts of data.

Technical Report UTEP-CS-97-4, April 1997

Technical Report UTEP-CS-97-4a, August 1997

Soft Computing Explains Heuristic Numerical Methods in Data Processing and in Logic Programming

Hung T. Nguyen, Vladik Kreinovich, and Bernadette Bouchon-Meunier

Preliminary version appeared in
*Working Notes of the AAAI Symposium on Frontiers in Soft Computing and
Decision Systems*, Boston, MA, November 8-10, 1997, pp. 40-45;

final version is in Larry Medsker (ed.), *Frontiers in Soft Computing
and Decision Systems*, AAAI Press (Publication No. FS-97-04), 1997,
pp. 30-35.

We show that fuzzy logic and other soft computing approaches explain and justify heuristic numerical methods used in data processing and in logic programming, in particular, M-methods in robust statistics, regularization techniques, metric fixed point theorems, etc.

Technical Report UTEP-CS-97-3, February 1997

Updated version UTEP-CS-97-3a, September 1997

Strict Archimedean t-Norms and t-Conorms as Universal Approximators

Hung T. Nguyen, Vladik Kreinovich and Piotr Wojciechowski

Published in *International Journal of Approximate Reasoning*,
1998, Vol. 18, No. 3-4, pp. 239-249.

In knowledge representation, when we have to use logical connectives,
various *continuous* t-norms and t-conorms are used. In
this paper, we show that every continuous t-norm and t-conorm can be
approximated, to an arbitrary degree of accuracy, by a strict
Archimedean t-norm (t-conorm).
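The flavor of the result can be illustrated with the Schweizer-Sklar family (one standard parametric family; the paper's construction may differ): for p < 0 its members are strict Archimedean t-norms, and as p decreases they approximate the continuous but non-Archimedean t-norm min(x, y) to any desired accuracy:

```python
def schweizer_sklar(x, y, p):
    """Schweizer-Sklar t-norm with parameter p < 0, which is strict and
    Archimedean; as p -> -infinity it converges to min(x, y)."""
    return (x ** p + y ** p - 1.0) ** (1.0 / p)

# approximating min(0.3, 0.8) = 0.3 to increasing accuracy:
for p in (-1.0, -10.0, -100.0):
    t = schweizer_sklar(0.3, 0.8, p)  # approaches 0.3 as p decreases
```

Any strict Archimedean t-norm lies strictly below min on the interior of the unit square, so the approximation is one-sided but can be made uniformly close.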

First version in LaTeX, uses
IJA.sty;

Final version in PostScript.

Technical Report UTEP-CS-97-2, January 1997

Computers of Generation Omega

Vladik Kreinovich

To make computers faster, we must use smaller and smaller processing elements, and to analyze these elements, we need to take into consideration more and more subtle quantum effects. A. Stern has shown a natural way to use quantum field theory for computing. It turns out that if we take into consideration all quantum effects, then this idea leads to even faster performance.

Technical Report UTEP-CS-97-1, January 1997

Multi-Criteria Optimization - an Important Foundation of Fuzzy System Design

Hung T. Nguyen and Vladik Kreinovich

Published in: L. Reznik, V. Dimitrov, and J. Kacprzyk,
*Fuzzy system design: social and engineering applications*,
Physica Verlag, Heidelberg, 1998, pp. 24-35.

In many real-life design situations, there are several different criteria that we want to optimize, and these criteria are often in conflict with each other. Traditionally, such multi-criteria optimization situations are handled in an ad hoc manner: different conflicting criteria are artificially combined into a single objective that is then optimized. The use of unnatural ad hoc tools is clearly not the best way of describing a very natural aspect of human reasoning. Fuzzy logic describes a much more natural way of handling multi-criteria optimization problems: when we cannot maximize each of the original conflicting criteria 100%, we optimize each to a certain extent.

Several methods have been proposed in fuzzy logic to handle multi-criteria optimization. These methods, however, still use some ad hoc ideas. In this paper, we show that some approaches to multi-objective optimization can be justified based on fuzzy logic alone and do not require any extra ad hoc tools.
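One standard fuzzy approach in this spirit is Bellman and Zadeh's max-min formulation (shown here only as background; the paper's own justification differs): each criterion becomes a membership function assigning every alternative a satisfaction degree in [0, 1], and we pick the alternative that maximizes the smallest of these degrees:

```python
def bellman_zadeh(candidates, criteria):
    """Max-min fuzzy multi-criteria choice: a candidate's overall score is
    the minimum satisfaction degree over all criteria (its weakest link);
    return the candidate maximizing that minimum."""
    return max(candidates, key=lambda x: min(mu(x) for mu in criteria))

# toy example: choose a speed that is both "fast" and "safe"
fast = lambda v: min(v / 100.0, 1.0)        # degree to which v is fast
safe = lambda v: max(1.0 - v / 120.0, 0.0)  # degree to which v is safe
best = bellman_zadeh(range(0, 121, 5), [fast, safe])
```

Neither criterion is satisfied 100%; the chosen speed is the compromise at which the two satisfaction degrees are as balanced, and as high, as possible.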

File in LaTeX, Compressed PostScript, and pdf