University of Texas at El Paso
Computer Science Department
Abstracts of 2008 Reports


Technical Report UTEP-CS-08-45, December 2008
Computing with Tensors: Potential Applications of Physics-Motivated Mathematics to Computer Science
Martine Ceberio and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2010, Vol. 4, No. 4, pp. 257-260.

In this paper, we explain what tensors are and how they can help in computing.

File in pdf


Technical Report UTEP-CS-08-44, December 2008
Mathematical Justification of Spectral/Covariance Techniques: On the Example of Arc Detection
Jan Beck, David Nemir, and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2009, Vol. 3, No. 22, pp. 1081-1089.

Detecting arcing faults is an important but difficult-to-solve practical problem. Many existing methods of arc detection are based on acquiring a signal that is proportional to the current and then analyzing the signal's power spectrum (or, equivalently, its covariance function). Since the power spectrum, i.e., the absolute values of the Fourier transform, carries only partial information about the signal, a natural question is: why should we restrict ourselves to the use of this partial information? A related question is raised by the fact that even the most efficient methods still miss some arcing faults and/or lead to false detections; what methods should we use to improve the quality of arc detection?

Our analysis is much more general than the arc detection problem and can be used to justify and select detection methods in other applied problems as well.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-43, December 2008
Estimating Risk under Interval Uncertainty: Sequential and Parallel Algorithms
Vladik Kreinovich, Hung T. Nguyen, and Songsak Sriboonchita

Short version published in Proceedings of the 2nd Annual Econometric Conference, Chiang Mai, Thailand, January 5-6, 2009, pp. 47-60; full paper published in International Journal of Intelligent Technologies and Applied Statistics, 2010, Vol. 3, No. 1, pp. 57-70.

In traditional econometrics, the quality of an individual investment -- and of the investment portfolio -- is characterized by its expected return and its risk (variance). For an individual investment or portfolio, we can estimate the future expected return and the future risk by tracing the returns x1, ..., xn of this investment (and/or similar investments) over the past years and computing the statistical characteristics based on these returns. The return (per unit investment) is defined as the selling price of the corresponding financial instrument at the end of, e.g., a one-year period, divided by the buying price of this instrument at the beginning of this period. It is usually assumed that we know the exact return values x1, ..., xn. In practice, however, both the selling and the buying prices unpredictably fluctuate from day to day -- and even within a single day. These minute-by-minute fluctuations are rarely recorded; what we usually have recorded is the daily range of prices. As a result, we can only find the range of possible values of the return xi. In this case, different possible values of xi lead, in general, to different values of the expected return E and of the risk V. In such situations, we are interested in producing the intervals of possible values of E and V.

In the paper, we describe algorithms for producing such interval estimates. The corresponding sequential algorithms, however, are rather complex and time-consuming. In financial applications, it is often very important to produce the result as fast as possible. One way to achieve this is to run these algorithms in parallel on several processors and thus speed up computations. In this paper, we show how the algorithms for estimating variance under interval uncertainty can be parallelized.
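
As a small illustration of the easy part of this problem (a Python sketch with made-up numbers, not the paper's algorithms): when each return xi is only known to lie in an interval, the range of the expected return E is obtained by simply averaging the endpoints; bounding the risk V is the harder task that the algorithms in the paper address.

    # Sketch: interval bounds on the expected return E = (x1 + ... + xn) / n
    # when each return x_i is only known to lie in [lo_i, hi_i].
    # (Bounding the variance V is harder and needs the paper's algorithms.)

    def mean_range(lows, highs):
        n = len(lows)
        return sum(lows) / n, sum(highs) / n

    # example: yearly returns known only as interval ranges (made-up values)
    lows  = [0.02, -0.01, 0.05]
    highs = [0.04,  0.01, 0.07]
    print(mean_range(lows, highs))   # -> approximately (0.02, 0.04)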

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-42, November 2008
On chromatic numbers of space-times: open problems
Olga Kosheleva and Vladik Kreinovich

Published in Geombinatorics, 2009, Vol. 19, No. 1, pp. 14-17.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-41, November 2008
A Paradox of Altruism: How Caring about Future Generations Can Result in Poverty for Everyone (Game-Theoretic Analysis)
Tanja Magoc and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2009, Vol. 3, No. 22, pp. 1091-1096.

Political and social activists are rightfully concerned about future generations: whenever a country borrows money, or an environmental situation worsens, this means, in effect, that we impose an additional burden on future generations. There is clearly a conflict between the present generation's actions and interests and the welfare of the future generations. There exists a mathematical toolbox that provides solutions to many well-defined conflict situations: namely, the toolbox of game theory. It therefore seems reasonable to apply game-theoretic techniques to the conflict between the generations. In this paper, we show that we need to be very cautious about this application, because reasonable game-theoretic techniques such as the Nash bargaining solution can lead to disastrous "solutions" such as universal poverty. In other words, seemingly reasonable altruism can lead to solutions which are as disastrous as extreme selfishness.

The development of appropriate techniques -- techniques which would lead to a reasonable resolution of this inter-generational conflict -- thus remains an important open problem.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-40, November 2008
Viability of Travel-time Sensitivity Testing for Estimating Uncertainty of Tomographic Velocity Models: A Case Study
Matthew G. Averill, Kate C. Miller, Vladik Kreinovich, and Aaron A. Velasco

Seismic tomography is now a common approach to estimating velocity structure of the Earth, regardless of whether the data sources are earthquake recordings or controlled sources such as explosions, airguns or Vibroseis. Seismic tomography is convenient to implement because it requires little to no a priori knowledge of Earth structure and is much less time consuming than forward modeling schemes. Despite its convenience, the method still lacks satisfactory quantitative assessments of model reliability. Here we explore the viability of applying travel-time sensitivity testing that uses a modified Cauchy distribution as its statistical foundation to assessing the uncertainty in velocity models produced with seismic tomography. Using a crustal refraction survey as a test data set, we find that this approach produces a more realistic estimate of the velocity uncertainty than does either a resampling approach or travel-time sensitivity testing that uses a Gaussian distribution as its statistical foundation. The velocity uncertainty estimates provide an important complement to other estimates of model reliability including checkerboard tests.
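
As background on one generic way in which Cauchy-distributed perturbations are used for sensitivity testing (a Python sketch under the assumption of an approximately linear model; not necessarily the exact procedure of this report): perturb the inputs with Cauchy deviates scaled by their accuracies and read the output uncertainty off the median absolute deviation of the results.

    # Generic Cauchy-deviate sketch (not necessarily this report's procedure):
    # perturb the inputs of a model f with Cauchy-distributed deviates whose
    # scales equal the input accuracies; for an (approximately) linear f, the
    # output deviations are Cauchy-distributed with scale equal to the
    # worst-case output uncertainty, estimated below by their median |value|.

    from math import tan, pi
    import random

    def cauchy_uncertainty(f, x, deltas, samples=2000):
        y0 = f(x)
        devs = sorted(abs(f([xi + di * tan(pi * (random.random() - 0.5))
                             for xi, di in zip(x, deltas)]) - y0)
                      for _ in range(samples))
        return devs[len(devs) // 2]        # median of |Cauchy(Delta)| is Delta

    f = lambda v: 2.0 * v[0] - 0.5 * v[1]  # toy linear "travel-time" model
    print(cauchy_uncertainty(f, [1.0, 1.0], [0.1, 0.2]))   # roughly 0.3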

File in pdf (caution: the size of this file is 15.65 MBytes)


Technical Report UTEP-CS-08-39, November 2008
A New Simplified Derivation of Nash Bargaining Solution
Tanja Magoc and Vladik Kreinovich

Published in Applied Mathematical Sciences, 2009, Vol. 3, No. 22, pp. 1097-1101.

In the 1950s, the Nobel Prize winner John F. Nash showed that, under certain conditions, the best solution to the bargaining problem is the one in which the product of the (increases in) utilities is the largest. Nash's derivation assumed that we are looking for strategies that assign a single outcome to each bargaining situation. In this paper, we propose a simplified derivation of the Nash bargaining solution that does not require this assumption.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-38, November 2008
An Aspect-Based Approach to Checking Design Constraints at Run-time
Yoonsik Cheon, Carmen Avila, Steve Roach, Cuahtemoc Munoz, Neith Estrada, Valeria Fierro, and Jessica Romo

Design decisions and constraints of a software system can be specified precisely using a formal notation such as the Object Constraint Language (OCL). However, they are not executable, and assuring the conformance of an implementation to its design is hard. The inability to express design constraints in an implementation and to check them at run time invites, among other problems, design drift and corrosion. We propose run-time checks as a solution to mitigate this problem. The key idea of our approach is to translate design constraints written in a formal notation such as OCL into aspects that, when applied to a particular implementation, check the constraints at run time. Our approach enables run-time verification of design-implementation conformance by detecting design corrosion. The approach is modular and plug-and-playable; the constraint-checking logic is completely separated from the implementation modules, which are oblivious to it. We believe that a significant portion of the constraint translation can be automated.
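
As a rough, language-neutral analogue of this idea (the paper itself uses OCL constraints translated into AspectJ aspects; the Python sketch below is only illustrative), the constraint-checking logic can live entirely outside the class it monitors:

    # Rough analogue only (the paper translates OCL to AspectJ aspects): keep
    # the constraint-checking logic separate from the monitored class and
    # apply it as a wrapper that runs after each state-changing method.

    def check_invariant(invariant):
        def wrap(method):
            def checked(self, *args, **kwargs):
                result = method(self, *args, **kwargs)
                assert invariant(self), "design constraint violated"
                return result
            return checked
        return wrap

    class Account:
        def __init__(self):
            self.balance = 0

        @check_invariant(lambda acc: acc.balance >= 0)   # "balance never negative"
        def withdraw(self, amount):
            self.balance -= amount

    Account().withdraw(5)   # raises AssertionError: design constraint violated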

File in PDF and in Compressed Postscript


Technical Report UTEP-CS-08-37, November 2008
Updated version UTEP-CS-08-37a, December 2008
Intelligence Techniques Are Needed to Further Enhance the Advantage of Groups with Diversity in Problem Solving
Oscar Castillo, Patricia Melin, J. Esteban Gamez, Vladik Kreinovich, and Olga Kosheleva

Published in the Proceedings of the 2009 IEEE Workshop on Hybrid Intelligent Models and Applications HIMA'2009, Nashville, Tennessee, March 30-April 2, 2009, pp. 48-55.

In practice, there are many examples where the diversity in a group enhances the group's ability to solve problems -- and thus leads to more efficient groups, firms, schools, etc. Several papers, starting with the pioneering research by Scott E. Page from the University of Michigan at Ann Arbor, provide a theoretical justification for this known empirical phenomenon. However, when the general advice of increasing diversity is transformed into simple-to-follow algorithmic rules (like quotas), the result is not always successful. In this paper, we prove that the problem of designing the most efficient group is computationally difficult (NP-hard). Thus, in general, it is not possible to come up with simple algorithmic rules for designing such groups: to design optimal groups, we need to combine standard optimization techniques with intelligent techniques that use expert knowledge.

Original file in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-36, October 2008
Current Financial Crisis and Inadequate Uncertainty Processing: A Comment
Tanja Magoc and Vladik Kreinovich

File in pdf


Technical Report UTEP-CS-08-35, September 2008
Updated version UTEP-CS-08-35a, December 2008
Verified methods for computing Pareto sets: general algorithmic analysis
Boglarka G.-Toth and Vladik Kreinovich

Published in International Journal of Applied Mathematics and Computer Science AMCS, 2009, Vol. 19, No. 3, pp. 369-380.

In many engineering problems, we face multi-objective optimization, with several objective functions f1, ..., fn. We want to provide the user with the Pareto set -- the set of all possible solutions x which cannot be improved in all categories (i.e., for which there is no x' with fj(x') >= fj(x) for all j and fj(x') > fj(x) for some j). The user should then be able to select an appropriate trade-off between, say, cost and durability. We extend the general results about the (verified) algorithmic computability of maxima locations to show that Pareto sets can also be computed.
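
To illustrate the notion itself (this is only the textbook definition in Python, with made-up data, not the verified algorithms studied in the paper): a brute-force Pareto filter over finitely many candidate solutions, assuming larger objective values are better.

    # Brute-force Pareto filter over a finite list of candidates, where each
    # candidate is a tuple of objective values to be maximized.  This only
    # illustrates the definition; the paper is about verified computation of
    # Pareto sets for general (continuous) problems.

    def dominates(a, b):
        return (all(x >= y for x, y in zip(a, b))
                and any(x > y for x, y in zip(a, b)))

    def pareto_set(candidates):
        return [c for c in candidates
                if not any(dominates(other, c) for other in candidates)]

    # example: (durability, -cost) pairs, both to be maximized
    print(pareto_set([(3, -10), (5, -12), (4, -12), (3, -15)]))
    # -> [(3, -10), (5, -12)]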

Original file in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-34, September 2008
Additional Information about American and Arab Perceptions of an Arabic Turn-Taking Cue
Nigel Ward and Yaffa Al Bayyari

This technical report is a supplement to "American and Arab perceptions of an Arabic turn-taking cue", a paper submitted to the Journal of Cross-Cultural Psychology. It provides additional details, discussion, figures, tables, and references relating to the main finding, that English speakers tend to misinterpret the prosodic pattern used in Arabic to cue back-channel responses, perceiving it as an expression of negative affect. It also describes an experimental demonstration that being able to detect and respond to this prosodic pattern in dialog can increase native-speaker perceptions of the social effectiveness of learners.

File in pdf


Technical Report UTEP-CS-08-33a, June 2010
Towards a More Adequate Defuzzification of Interval-Valued Fuzzy Sets
Vladik Kreinovich, Van Nam Huynh, and Yoshiteru Nakamori

To appear in the Proceedings of the 7th International Conference on Modeling Decisions for Artificial Intelligence MDAI'2010, Perpignan, French Catalonia, France, October 27-29, 2010

It is known that interval-valued fuzzy sets [m(x)] provide a more adequate description of expert uncertainty than the more traditional "type-1" (number-valued) fuzzy techniques. Specifically, an interval-valued fuzzy set can be viewed as a class of possible fuzzy sets m(x) from [m(x)]. In this case, as a result of defuzzification, it is natural to return the range [u] of all possible values u(m) that can be obtained by defuzzifying membership functions m(x) from this class.

In practice, it is reasonable to restrict ourselves only to fuzzy numbers m(x), i.e., to "unimodal" fuzzy sets. Under this restriction, in general, we get a narrower range [a] of possible values of u(m). In this paper, we describe a feasible algorithm for computing the new range [a].
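
A crude numerical illustration of what the range [a] collects (a Python sketch with invented membership functions; the paper gives an exact feasible algorithm): different unimodal membership functions m(x) lying between the lower and upper membership functions lead to different centroid defuzzifications.

    # Crude illustration only (invented shapes; the paper gives an exact
    # algorithm): sample a few unimodal membership functions between the lower
    # and upper membership functions and collect their centroid defuzzifications.

    xs = [i / 10 for i in range(11)]                         # grid on [0, 1]
    lower = [max(0.0, 1 - 2.5 * abs(x - 0.5)) for x in xs]   # lower membership
    upper = [max(0.0, 1 - 1.25 * abs(x - 0.5)) for x in xs]  # upper membership

    def centroid(m):
        return sum(x * mu for x, mu in zip(xs, m)) / sum(m)

    # a few unimodal candidates between lower and upper (peak stays at x = 0.5)
    left_heavy  = [u if x <= 0.5 else l for x, l, u in zip(xs, lower, upper)]
    right_heavy = [l if x <= 0.5 else u for x, l, u in zip(xs, lower, upper)]
    values = [centroid(m) for m in (lower, upper, left_heavy, right_heavy)]
    print(min(values), max(values))   # rough inner estimate of the range [a]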

File in pdf


Technical Report UTEP-CS-08-33, August 2008
Towards a More Adequate Use of Interval-Valued Fuzzy Techniques in Intelligent Control: A Fuzzy Analogue of Unimodality
Van Nam Huynh and Vladik Kreinovich

Published in the Proceedings of the International Workshop on Soft Computing for Knowledge Technology, in conjunction with The Tenth Pacific Rim International Conference on Artificial Intelligence PRICAI'08, Hanoi, Vietnam, December 15-19, 2008, pp. 80-89.

It is known that interval-valued fuzzy sets provide a more adequate description of expert uncertainty than the more traditional "type-1" (number-valued) fuzzy techniques. In the current approaches to using interval-valued fuzzy techniques, it is usually assumed that all fuzzy sets m(x) from the interval [l(x),u(x)] are possible. In this paper, we show that it is reasonable to restrict ourselves only to fuzzy numbers m(x), i.e., "unimodal" fuzzy sets. We also describe feasible algorithms for implementing the thus-modified intelligent control.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-32, August 2008
Extracting Trust Network Information from Scientific Web Portals
Alejandro Castaneda and Paulo Pinheiro da Silva

An increased exchange of (scientific) information across organizations and disciplines is one of the long-term goals of the semantic web. In any such exchange of information, it is not difficult to identify one or more (scientific) communities responsible for the measurement, gathering, and processing of scientific information. More challenging, however, is to understand the trust relations between members of these communities, whether the members are organizations or people. With a better understanding of trust relations, one may be able to compute trust recommendations for scientific information exchange, thereby increasing the acceptance of information by scientists. In this paper, we present CI-Learner, a systematic approach to extracting trust-related meta-information from scientific portals and related web sites. CI-Learner meta-information is organized as trust networks based on people, organizations, publications, and trust relations derived from publication co-authorship. Participation in a given trust network is restricted to organizations and people as identified by the CI-Learner information extraction process. The paper reports on the usefulness of the extracted trust network and related ranking as identified in a user study with subjects who are experts in the field of concern.
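
A toy version of the final derivation step (invented data, Python; not the CI-Learner system itself): turning publication author lists into a weighted co-authorship graph from which trust relations can be read off.

    # Toy version of one step (invented data, not the actual CI-Learner code):
    # derive co-authorship edges, weighted by the number of joint publications,
    # from a list of publications given as author lists.

    from collections import Counter
    from itertools import combinations

    publications = [
        ["A. Castaneda", "P. Pinheiro da Silva"],
        ["P. Pinheiro da Silva", "V. Kreinovich", "C. Servin"],
        ["A. Castaneda", "P. Pinheiro da Silva", "V. Kreinovich"],
    ]

    edges = Counter()
    for authors in publications:
        for a, b in combinations(sorted(authors), 2):
            edges[(a, b)] += 1            # edge weight = number of joint papers

    for (a, b), weight in edges.most_common():
        print(a, "--", b, ":", weight)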

File in pdf


Technical Report UTEP-CS-08-31, August 2008
Updated version UTEP-CS-08-31a, September 2008
Maximum Entropy in Support of Semantically Annotated Datasets
Paulo Pinheiro da Silva, Vladik Kreinovich, and Christian Servin

To appear in the Proceedings of the 4th International Workshop on Uncertainty Reasoning for the Semantic Web URSW'2008, Karlsruhe, Germany, October 26, 2008.

One of the important problems of the semantic web is checking whether two datasets describe the same quantity. The existing solution to this problem is to use these datasets' ontologies to deduce that the datasets indeed represent the same quantity. However, even when the ontologies seem to confirm the identity of the two corresponding quantities, it is still possible that, in reality, we deal with somewhat different quantities. A natural way to check the identity is to compare the numerical values of the measurement results: if they are close (within measurement errors), then most probably we deal with the same quantity; otherwise, we most probably deal with different ones. In this paper, we show how to perform this checking.

Original file in pdf and in Compressed Postscript;
updated file in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-30, July 2008
Updated version UTEP-CS-08-30a, September 2008
TrustMap: Towards Trust Recommendations for Maps
Paulo Pinheiro da Silva, Nicholas Del Rio, Vladik Kreinovich, and Alejandro Castaneda

To appear in Proceedings of the Terra Cognita 2008 Workshop, Karlsruhe, Germany, October 26, 2008.

The web is a rich environment for exchanging spatial information. When spatial information is shared in the form of images, i.e., maps, these images almost never come with meta-information about how they were generated. This kind of meta-information is often called knowledge provenance. Access to knowledge provenance may help users make informed decisions about the quality of maps. In this paper, we propose TrustMap, a new approach for enhancing maps with trust recommendations. For a given map, TrustMap can generate recommendations from the knowledge provenance and a network of trust relations between sources of information used to derive the map. The paper describes a TrustMap implementation as well as an on-going qualitative evaluation of the current implementation.

Original file in pdf
updated file in pdf


Technical Report UTEP-CS-08-29, July 2008
Testing the Value of a Time-based Language Model for Speech Recognition
Nisha Kiran and Nigel G. Ward

Speech recognition relies on the language model in order to decode an utterance, and in general a better language model improves the performance of a speech recognizer. We have recently found that a time-based language model can improve on a standard trigram language model in terms of perplexity. This technical report presents the evaluation of this new language model in the context of speech recognition. First, a basic speech recognizer was built using the HTK toolkit. Then the recognizer was run using the standard language model and using the time-based one. On a test set of 39,147 words from the Switchboard corpus, there was a slight improvement, with the percentage of words correctly recognized going from 11.31% to 11.40%.

File in pdf


Technical Report UTEP-CS-08-28, June 2008
Revised version UTEP-CS-08-28a, August 2008
Choquet Integrals and OWA Criteria as a Natural (and Optimal) Next Step After Linear Aggregation: A New General Justification
Francois Modave, Martine Ceberio, and Vladik Kreinovich

To appear in Alexander Gelbukh and Eduardo F. Morales (eds.), Proceedings of the 7th Mexican International Conference on Artificial Intelligence MICAI'08, Mexico City, Mexico, October 27-31, 2008, Springer Lecture Notes on Artificial Intelligence, 2008, Vol. 5317, pp. 741-753.

In multi-criteria decision making, it is necessary to aggregate (combine) utility values corresponding to several criteria (parameters). The simplest way to combine these values is to use linear aggregation. In many practical situations, however, linear aggregation does not fully adequately describe the actual decision making process, so non-linear aggregation is needed.

From the purely mathematical viewpoint, the next natural step after linear functions is the use of quadratic functions. However, in decision making, a different type of non-linearity is usually more adequate than quadratic ones: non-linearities like OWA or the Choquet integral that use min and max in addition to linear combinations. In this paper, we explain the empirically observed advantage of such aggregation operations.
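
For readers unfamiliar with these operations, here is the standard definition of OWA aggregation in a few lines of Python (generic background, nothing specific to this paper): the utility values are sorted and then linearly combined with fixed weights, so min, max, and the arithmetic mean are all special cases.

    # Ordered Weighted Average (OWA): sort the utility values, then take a
    # fixed-weight linear combination of the sorted values.

    def owa(values, weights):
        ordered = sorted(values, reverse=True)
        return sum(w * v for w, v in zip(weights, ordered))

    scores = [0.9, 0.4, 0.7]                 # utilities w.r.t. three criteria
    print(owa(scores, [1.0, 0.0, 0.0]))      # = max = 0.9
    print(owa(scores, [0.0, 0.0, 1.0]))      # = min = 0.4
    print(owa(scores, [1/3, 1/3, 1/3]))      # = arithmetic mean = 0.666...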

Original file in pdf and in Compressed Postscript
revised version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-27, June 2008
Updated version UTEP-CS-08-27a, August 2008
Computing Degrees of Subsethood and Similarity for Interval-Valued Fuzzy Sets: Fast Algorithms
Hung T. Nguyen and Vladik Kreinovich

Published in Proceedings of the 9th International Conference on Intelligent Technologies InTech'08, Samui, Thailand, October 7-9, 2008, pp. 47-55.

We propose fast algorithms for computing degrees of subsethood and similarity for interval-valued fuzzy sets.
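
As background only (the abstract is terse, and these are common textbook definitions rather than necessarily the ones the paper speeds up): for ordinary fuzzy sets given by membership values on a finite grid, typical degrees of subsethood and similarity look as follows; for interval-valued fuzzy sets, each degree becomes an interval, and computing its endpoints quickly is what the paper's algorithms address.

    # Background only: common degrees of subsethood (Kosko-style) and
    # similarity (Jaccard-style) for ordinary fuzzy sets on a finite grid.
    # For interval-valued fuzzy sets these degrees become intervals.

    def subsethood(a, b):        # how much A is contained in B
        return sum(min(x, y) for x, y in zip(a, b)) / sum(a)

    def similarity(a, b):        # how similar A and B are
        return (sum(min(x, y) for x, y in zip(a, b))
                / sum(max(x, y) for x, y in zip(a, b)))

    A = [0.2, 0.5, 0.3]
    B = [0.3, 0.6, 0.2]
    print(subsethood(A, B), similarity(A, B))   # roughly 0.9 and 0.75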

Original file in pdf and in Compressed Postscript;
updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-26, June 2008
Updated version UTEP-CS-08-26a, August 2008
Final version UTEP-CS-08-26b, August 2008
Hypothesis Testing with Interval Data: Case of Regulatory Constraints
Sa-aat Niwitpong, Hung T. Nguyen, Ingo Neumann, and Vladik Kreinovich

A short version published in Proceedings of the 9th International Conference on Intelligent Technologies InTech'08, Samui, Thailand, October 7-9, 2008, pp. 11-20; the full paper will appear in International Journal of Intelligent Technologies and Applied Statistics, 2008, Vol. 1, No. 2, pp. 19-41.

In many practical situations, there exist regulatory thresholds: e.g., a concentration of certain chemicals in the car exhaust cannot exceed a certain level, etc. If we know the exact value of the corresponding quantity, then we can immediately tell whether, e.g., a car design resulting in this value is acceptable (below the threshold) or not acceptable (above the threshold). In practice, however, the value of the desired quantity comes from measurements or from expert estimates; in both cases, the resulting estimates are not 100% accurate. It is therefore necessary to make an accept/reject decision based on this estimate, i.e., based on the approximate value of the quantity.

A similar situation occurs when instead of the exact threshold, experts provide us with an imprecise threshold like "about 140". In this paper, we describe how to make accept/reject decisions under measurement or expert uncertainty in case of regulatory and expert-based thresholds, where the threshold does not come from a detailed statistical analysis.
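
The basic decision skeleton discussed here can be sketched in a few lines (illustrative only; the paper's statistical treatment of the borderline cases is the actual contribution): compare the interval of possible values of the quantity, estimate plus or minus its accuracy, with the threshold.

    # Illustrative skeleton only: with an estimate of known accuracy, the actual
    # value lies in [estimate - accuracy, estimate + accuracy]; compare this
    # interval with the regulatory threshold.

    def decide(estimate, accuracy, threshold):
        lo, hi = estimate - accuracy, estimate + accuracy
        if hi <= threshold:
            return "accept"            # guaranteed below the threshold
        if lo > threshold:
            return "reject"            # guaranteed above the threshold
        return "inconclusive"          # the interval straddles the threshold

    print(decide(135.0, 3.0, 140.0))   # accept
    print(decide(142.0, 1.0, 140.0))   # reject
    print(decide(139.0, 3.0, 140.0))   # inconclusive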

Original file in pdf and in Compressed Postscript;
updated version in pdf and in Compressed Postscript
final version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-25, April 2008
Updated version UTEP-CS-08-25b, September 2008
Computational Methods for Investment Portfolio: the Use of Fuzzy Measures and Constraint Programming for Risk Management
Tanja Magoc, Francois Modave, Martine Ceberio, and Vladik Kreinovich

Published in Aboul-Ella Hassanien and Ajith Abraham (eds.), Foundations of Computational Intelligence, Springer-Verlag, 2009, Vol. 2, pp. 133-173.

Computational intelligence techniques are very useful tools for solving problems that involve understanding, modeling, and analysis of large data sets. One of the numerous fields where computational intelligence has found an extremely important role is finance. More precisely, optimization issues of one's financial investments, to guarantee a given return at a minimal risk, have been solved using intelligent techniques such as genetic algorithms, rule-based expert systems, neural networks, and support vector machines. Even though these methods provide good and usually fast approximations of the best investment strategy, they suffer from some common drawbacks, including the neglect of the dependence among the criteria characterizing investment assets (i.e., return, risk, etc.) and the assumption that all available data are precise and certain. To address these weaknesses, we propose a novel approach involving a utility-based multi-criteria decision making setting and fuzzy integration over intervals.

Original file in pdf
updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-24, April 2008
Opportunistic Checkpoint Intervals to Improve System Performance
Sarala Arunagiri, John T. Daly, Patricia J. Teller, Seetharami Seelam, Ron A. Oldfield, Maria Ruiz Varela, and Rolf Riesen

The massive scale of current and next-generation massively parallel processing (MPP) systems presents significant challenges related to fault tolerance. For applications that perform periodic checkpoints, the choice of the checkpoint interval, the period between checkpoints, can have a significant impact on the execution time of the application. Finding the optimal checkpoint interval that minimizes the wall-clock execution time has been a subject of research over the last decade. In an environment where concurrent applications compete for access to the network and storage resources, contention at these shared resources, in addition to application execution times, needs to be factored into the process of choosing checkpoint intervals. In this paper, we perform analytical modeling of a complementary performance metric - the aggregate number of checkpoint I/O operations. We then show the existence of, and characterize, a range of checkpoint intervals which has the potential to improve application and system performance.
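
For context, the classical first-order approximation for the wall-clock-optimal checkpoint interval (the Young/Daly formula, standard background rather than this paper's new I/O-oriented model) is easy to state and compute:

    # Standard background, not the paper's model: the Young/Daly approximation
    # of the checkpoint interval that minimizes wall-clock time,
    #     tau_opt ~ sqrt(2 * delta * M),
    # where delta = time to write one checkpoint, M = mean time between failures.

    from math import sqrt

    def young_daly_interval(checkpoint_cost_s, mtbf_s):
        return sqrt(2.0 * checkpoint_cost_s * mtbf_s)

    # e.g., 5-minute checkpoints on a system with a 24-hour MTBF
    print(young_daly_interval(300.0, 24 * 3600.0) / 3600.0, "hours")   # 2.0 hours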

File in pdf


Technical Report UTEP-CS-08-23, April 2008
Fast Algorithms for Uncertainty Propagation, and Their Applications to Structural Integrity
Andrzej Pownuk, Jakub Cerveny, and Jerald J. Brady

Published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008.

In many practical situations, we need to know how uncertainty propagates through data processing algorithms, i.e., how the uncertainty in the inputs affects the results of data processing. This problem is important for all types of uncertainty: probabilistic, interval, and fuzzy. From the computational viewpoint, however, this problem is much more complex for interval and fuzzy uncertainty. Therefore, for these types of uncertainty, it is desirable to design faster algorithms.

In this paper, we describe faster algorithms for two practically important situations:

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-22, April 2008
Sloth-NFS and the Possibility of using Fuzzy Control to Optimize Cache Management
Ryan C. Spring and Eric A. Freudenthal

Published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008.

Modern software systems are complex and increasingly vulnerable to malicious attack. In order to apply bug fixes and protect against security weaknesses, workstation administrators must continuously patch operating system and application program installations. To simplify this process, administrators generally configure systems into clusters with common installations of operating systems and application programs, in a manner such that patches and other routine maintenance can be applied en masse. Two widely used approaches offer complementary advantages: Installations and updates are particularly convenient when all data, including the operating system and programs, is served by traditional centralized network-connected file servers (networked file systems), because administrators need only maintain a single image that is distributed online to all workstations. In contrast, operating systems and application programs can be installed onto disk drives within each workstation. This mass replication of common data provides high aggregate bandwidth through parallelism, since each disk operates independently. However, in contrast to the network file system approach, it is substantially more difficult to keep a large network of systems up to date when each system has an autonomous installation. Through the aggressive use of cooperative file caching, we expect that Sloth-NFS will provide the advantages of both approaches. In this paper, we discuss the administrative problems with current distribution methods for program and operating system installations, analyze design decisions necessary in Sloth-NFS, and present results from an initial simulator experiment.

File in pdf


Technical Report UTEP-CS-08-21, April 2008
Program Synthesis from Workflow-Driven Ontologies
Leonardo Salayandia, Steve Roach, and Ann Q. Gates

Published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008.

An approach that results in the development of Workflow-Driven Ontologies (WDO), called the WDO approach, allows domain scientists to capture process knowledge in the form of concepts as well as relations between concepts. Program synthesis techniques can be employed to generate algorithmic solutions by transforming the process knowledge expressed in the WDO as concepts and relations into variables and functions, and by computing unknown variables from known ones based on the process knowledge documented by the domain scientist. Furthermore, the algorithmic solutions that are generated by program synthesis can potentially support the composition of services, which results in the creation of executable scientific workflows.

The ultimate goal of this work is to provide an end-to-end solution for scientists beginning with modeling the processes for creating work products in terminology from the scientist's own domains and ending with component-based applications that can be used to automate processes that can advance their scientific endeavors. These applications can exploit distributed components that are supported by current cyber-infrastructure efforts. This paper discusses extensions to the WDO approach that support program synthesis. To elucidate this scenario, an example related to earth sciences is presented.

File in pdf


Technical Report UTEP-CS-08-20, April 2008
Asymmetric Paternalism: Description of the Phenomenon, Explanation Based on Decisions Under Uncertainty, and Possible Applications to Education
Olga Kosheleva and Francois Modave

Published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008.

In general, human beings are rational decision makers, but in many situations, they exhibit unexplained "inertia", a reluctance to switch to a better decision. In this paper, we show that this seemingly irrational behavior can be explained if we take uncertainty into account; we also explain how this phenomenon can be utilized in education.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-19, March 2008
Relations between Interval Computing and Soft Computing
Vladik Kreinovich

Published in: Chenyi Hu, R. Baker Kearfott, Andre de Korvin, and Vladik Kreinovich (eds.), Knowledge Processing with Interval and Soft Computing, Springer Verlag, London, 2008, pp. 75-97.

This paper starts with a brief reminder of why data processing and knowledge processing are needed in the first place, why interval and fuzzy methods are needed for data and knowledge processing, and which of the possible data and knowledge processing techniques we should use. Then, we explain how these reasonable soft computing techniques are naturally related with interval computing. Finally, we explain the need for interval-valued fuzzy techniques -- techniques which will be used a lot in our future applications -- and how the transition to such techniques is also related to interval computing.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-18, March 2008
Updated version UTEP-CS-08-18a, March 2008
How to Measure a Degree of Mismatch Between Probability Models, p-Boxes, etc.: A Decision-Theory-Motivated Utility-Based Approach
Luc Longpre, Scott Ferson, and W. Troy Tucker

Published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008.

Different models can be used to describe real-life phenomena: deterministic, probabilistic, fuzzy, models in which we have interval-valued or fuzzy-valued probabilities, etc. Models are usually not absolutely accurate. It is therefore important to know how accurate a given model is. In other words, it is important to be able to measure the mismatch between the model and the empirical data. In this paper, we describe an approach to measuring this mismatch which is based on the notion of utility, the central notion of utility theory.

We also show that a similar approach can be used to measure the loss of privacy.

Original file in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-17, March 2008
Updated version UTEP-CS-08-17a, March 2008
Towards an Optimal Algorithm for Computing Fixed Points: Dynamical Systems Approach, With Applications to Transportation Engineering
Ruey L. Cheu, Gang Xiang, and Vladik Kreinovich

Published in Proceedings of the Sixth EUROMECH Nonlinear Dynamics Conference ECON'08, St. Petersburg, Russia, June 30 - July 4, 2008.

In many practical problems, it is desirable to find an equilibrium. For example, equilibria are important in transportation engineering.

Many urban areas suffer from traffic congestion. Intuitively, it may seem that a road expansion (e.g., the opening of a new road) should always improve the traffic conditions. However, in reality, a new road can actually worsen traffic congestion. It is therefore extremely important that before we start a road expansion project, we first predict the effect of this project on traffic congestion.

When a new road is built, some traffic moves to this road to avoid congestion on the other roads; this causes congestion on the new road, which, in turn, leads drivers to go back to their previous routes, etc. What we want to estimate is the resulting equilibrium.

In many problems -- e.g., in many transportation problems -- natural iterations do not converge. It turns out that the convergence of the corresponding fixed point iterations can be improved if we consider these iterations as an approximation to the appropriate dynamical system.
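
The general idea can be sketched in a few lines (a generic damped iteration in Python, not the paper's transportation model): instead of the raw iteration x_{k+1} = f(x_k), follow the dynamical system dx/dt = f(x) - x with a small step h, which can converge even when the raw iteration oscillates or diverges.

    # Generic sketch (not the paper's transportation model): damped fixed-point
    # iteration that follows the dynamical system dx/dt = f(x) - x with step h.

    def damped_fixed_point(f, x0, h=0.2, steps=200):
        x = x0
        for _ in range(steps):
            x = x + h * (f(x) - x)
        return x

    f = lambda x: -1.5 * x + 5.0        # raw iteration x -> f(x) diverges ...
    print(damped_fixed_point(f, 0.0))   # ... damped iteration converges to 2.0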

Original file in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-16, March 2008
Updated version UTEP-CS-08-16a, March 2008
Estimating Variance under Interval and Fuzzy Uncertainty: Parallel Algorithms
Karen Villaverde and Gang Xiang

Published in Proceedings of the IEEE World Congress on Computational Intelligence WCCI'2008, Hong Kong, China, June 1-6, 2008, pp. 1030-1033.

Traditional data processing in science and engineering starts with computing the basic statistical characteristics such as the population mean E and population variance V. In computing these characteristics, it is usually assumed that the corresponding data values x1,...,xn are known exactly. In many practical situations, we only know intervals [xi-,xi+] that contain the actual (unknown) values of xi or, more generally, a fuzzy number that describes xi. In this case, different possible values of xi lead, in general, to different values of E and V. In such situations, we are interested in producing the intervals of possible values of E and V -- or fuzzy numbers describing E and V. There exist algorithms for producing such interval and fuzzy estimates. However, these algorithms are more complex than the typical data processing formulas and thus require a larger amount of computation time. If we have several processors, it is desirable to perform these algorithms in parallel on these processors and thus speed up computations. In this paper, we show how the algorithms for estimating variance under interval and fuzzy uncertainty can be parallelized.

Original file in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-15, March 2008
Updated version UTEP-CS-08-15a, March 2008
How to Reconcile Physical Theories with the Idea of Free Will: From Analysis of a Simple Model to Interval and Fuzzy Approaches
Julio C. Urenda and Olga Kosheleva

Published in Proceedings of the IEEE World Congress on Computational Intelligence WCCI'2008, Hong Kong, China, June 1-6, 2008, pp. 1024-1029.

Most modern physical theories are formulated in terms of differential equations. As a result, if we know exactly the current state of the world, then this state uniquely determines all the future events -- including our own future behavior. This determination seems to contradict the intuitive notion of a free will, according to which we are free to make decisions -- decisions which cannot be determined based on the past locations and velocities of the elementary particles. In quantum physics, the situation is somewhat better in the sense that we cannot determine the exact behavior, but we can still determine the quantum state, and thus, we can determine the probabilities of different behaviors -- which is still inconsistent with our intuition.

This inconsistency does not mean, of course, that we can practically predict our future behavior; however, in view of many physicists and philosophers, even the theoretical inconsistency is somewhat troubling. Some of these researchers feel that it is desirable to modify physical equations in such a way that such a counter-intuitive determination would no longer be possible.

In this paper, we analyze the foundations for such possible theories, and show that on the level of simple mechanics, the formalization of a free will requires triple interactions -- while traditional physics is based on pairwise interactions between the particles.

Original file in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-14, March 2008
Updated version UTEP-CS-08-14a, March 2008
Selecting the Most Representative Sample is NP-Hard: Need for Expert (Fuzzy) Knowledge
J. Esteban Gamez, Francois Modave, and Olga Kosheleva

Published in Proceedings of the IEEE World Congress on Computational Intelligence WCCI'2008, Hong Kong, China, June 1-6, 2008, pp. 1069-1074.

One of the main applications of fuzzy techniques is to formalize the notions of "typical", "representative", etc.

The main idea behind fuzzy techniques is that they formalize expert knowledge expressed by words from natural language.

In this paper, we show that if we do not use this knowledge, i.e., if we only use the data, then selecting the most representative sample becomes a computationally difficult (NP-hard) problem. Thus, the need to find such samples in reasonable time justifies the use of fuzzy techniques.

Original file in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-13, March 2008
Revised version UTEP-CS-08-13a, March 2008
Beyond Intervals: Phase Transitions Lead to More General Ranges
Karen Villaverde and Gilbert Ornelas

Short version published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008; full version published in Journal of Uncertain Systems, 2010, Vol. 4, No. 4, pp. 312-319.

One of the main tasks of science and engineering is to use the current values of the physical quantities for predicting the future values of the desired quantities. Due to the (inevitable) measurement inaccuracy, we usually know the current values of the physical quantities with interval uncertainty. Traditionally, it is assumed that all the processes are continuous; as a result, the range of possible values of the future quantities is also known with interval uncertainty. However, in many practical situations (such as phase transitions), the dependence of the future values on the current ones becomes discontinuous. We show that in such cases, initial interval uncertainties can lead to arbitrary bounded closed ranges of possible values of the future quantities. We also show that the possibility of such a discontinuity may drastically increase the computational complexity of the corresponding range prediction problem.

Original file in pdf and in Compressed Postscript
Revised version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-12, March 2008
Updated version UTEP-CS-08-12a, March 2008
Applications of 1-D Versions of Image Referencing Techniques to Hydrology and to Patient Rehabilitation
Roberto Araiza, Martine Ceberio, Naga Suman Kanagala, Vladik Kreinovich, and Gang Xiang

Published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008.

In this paper, we consider two seemingly unrelated problems: the hydrology problem of relation between groundwater and surface water, and a problem of identification of human gait in neuro-rehabilitation. It turns out that in both problems, we can efficiently use soft computing-motivated algorithms originally developed for image referencing.

Original file in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-11, March 2008
Updated version UTEP-CS-08-11a, March 2008
Everything Is a Matter of Degree: A New Theoretical Justification of Zadeh's Principle
Hung T. Nguyen and Vladik Kreinovich

Published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008.

One of the main ideas behind fuzzy logic and its applications is that everything is a matter of degree. We are often accustomed to thinking that every statement about the physical world is true or false -- that an object is either a particle or a wave, that a person is either young or not, either well or ill -- but in reality, we sometimes encounter intermediate situations. In this paper, we show that the existence of such intermediate situations can be theoretically explained -- by the natural assumption that the real world is cognizable.

Original version in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-11c, August 2008
Intermediate Degrees are Needed for the World to be Cognizable: Towards a New Justification for Fuzzy Logic Ideas
Hung T. Nguyen, Vladik Kreinovich, J. Esteban Gamez, Francois Modave, and Olga Kosheleva

Published in Aboul-Ella Hassanien and Ajith Abraham (eds.), Foundations of Computational Intelligence, Springer-Verlag, 2009, Vol. 2, pp. 53-74.

Most traditional examples of fuzziness come from the analysis of commonsense reasoning. When we reason, we use words from natural language like "young" and "well". In many practical situations, these words do not have a precise true-or-false meaning; they are fuzzy. One may therefore be left with the impression that fuzziness is a subjective characteristic, caused by the specific way our brains work.

However, the fact that we are the result of billions of years of successful adjusting-to-the-environment evolution makes us conclude that nothing about us humans is accidental. In particular, the way we reason is not accidental; this way must reflect some real-life phenomena -- otherwise, this feature of our reasoning would have been useless and would have been abandoned long ago.

In other words, the fuzziness in our reasoning must have an objective explanation -- in fuzziness of the real world.

In this paper, we first give examples of objective real-world fuzziness. After these examples, we provide an explanation of this fuzziness -- in terms of the cognizability of the world.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-10, March 2008
Updated short version UTEP-CS-08-10a, March 2008
Final version UTEP-CS-08-10c, July 2009
Square Root of "Not": A Major Difference Between Fuzzy and Quantum Logics
Vladik Kreinovich, Ladislav J. Kohout, and Eunjin Kim

Short version published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008; full paper published in International Journal of General Systems, 2011, Vol. 40, No. 1, pp. 111-127.

Many authors have mentioned the similarity between quantum logic and fuzzy logic. In this paper, we show that, in spite of this similarity, these logics are not identical. Specifically, we emphasize that while quantum logic has a special "square root of not" operation which is very useful in quantum computing, fuzzy logic lacks such an operation.
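
To make the cited operation concrete (standard quantum-computing background, not new material from the paper): the square root of NOT is a unitary matrix V with V*V equal to the NOT (bit-flip) gate, a behavior with no counterpart among fuzzy negation operations.

    # Standard background: V = 0.5 * [[1+i, 1-i], [1-i, 1+i]] is a "square root
    # of not" -- applying it twice gives the NOT (bit-flip) gate.

    def matmul2(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    V = [[0.5 + 0.5j, 0.5 - 0.5j],
         [0.5 - 0.5j, 0.5 + 0.5j]]

    print(matmul2(V, V))   # -> [[0j, (1+0j)], [(1+0j), 0j]], i.e., NOT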

Original file in pdf and in Compressed Postscript
Updated short version in pdf and in Compressed Postscript
Final version in pdf


Technical Report UTEP-CS-08-09, March 2008
CAHSI Year 2 Evaluation Report: Recruiting, Retaining, and Advancing Hispanics in Computing
Heather Thiry, Sarah Hug, and Lecia Barker

File in pdf


Technical Report UTEP-CS-08-08, March 2008
Updated short version UTEP-CS-08-08a, March 2008
Final version UTEP-CS-08-08c, July 2009
Quantum Computations Techniques for Gauging Reliability of Interval and Fuzzy Data
Luc Longpre, Christian Servin, and Vladik Kreinovich

Short version published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008; full paper published in International Journal of General Systems, 2011, Vol. 40, No. 1, pp. 99-109.

In traditional interval computations, we assume that the interval data correspond to guaranteed interval bounds, and that fuzzy estimates provided by experts are correct. In practice, measuring instruments are not 100% reliable and experts are not 100% reliable: we may have estimates which are "way off", i.e., intervals which do not contain the actual values at all. Usually, we know the percentage of such outlier unreliable measurements. However, it is desirable to check that the reliability of the actual data is indeed within the given percentage. The problem of checking (gauging) this reliability is, in general, NP-hard; in reasonable cases, there exist feasible algorithms for solving it. In this paper, we show that quantum computation techniques can drastically speed up the computation of the reliability of given data.

Original file in pdf and in Compressed Postscript
Updated short version in pdf and in Compressed Postscript
Final version in pdf


Technical Report UTEP-CS-08-07, February 2008
Integrating Random Testing with Constraints for Improved Efficiency and Diversity
Yoonsik Cheon, Antonio Cortes, Martine Ceberio, and Gary T. Leavens

Random testing can be fully automated, eliminates subjectiveness in constructing test cases, and increases the diversity of test data. However, randomly generated tests may not satisfy the program's assumptions (e.g., method preconditions). While constraint solving can satisfy such assumptions, it does not necessarily generate diverse tests and is hard to apply to large programs.

We blend these techniques by extending random testing with constraint solving, improving the efficiency of generating valid test data while preserving diversity. For domains such as objects, we generate input values randomly; for values of finite domains such as integers, however, we represent test data generation as a constraint satisfaction problem, solving constraints extracted from the precondition of the method under test. We also increase the diversity of constraint-based solutions by incorporating randomness into the solver's enumeration process. In our experimental evaluation, we observed an average 80-fold improvement in the time needed to generate a given number of test cases, without decreasing test-case diversity.
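
A toy Python sketch of the contrast being drawn (the tool itself targets Java programs with JML-style specifications; everything below is invented for illustration): purely random generation wastes many draws on inputs that violate a precondition, while a precondition-aware generator produces only valid inputs, shuffled for diversity.

    # Toy contrast only (the actual tool works on Java programs, not Python).

    import random

    precondition = lambda x, y: 0 < x < y <= 100   # the method's assumption

    def random_tests(n):                           # rejection-based random testing
        tests, draws = [], 0
        while len(tests) < n:
            x, y = random.randint(-1000, 1000), random.randint(-1000, 1000)
            draws += 1
            if precondition(x, y):
                tests.append((x, y))
        return tests, draws

    def constrained_tests(n):                      # enumerate valid inputs, shuffled
        valid = [(x, y) for x in range(1, 101) for y in range(x + 1, 101)]
        random.shuffle(valid)
        return valid[:n]

    tests, draws = random_tests(50)
    print(len(tests), "valid tests after", draws, "random draws")
    print(len(constrained_tests(50)), "valid tests generated directly")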

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-06, February 2008
Identification of Human Gait in Neuro-Rehabilitation: Towards Efficient Algorithms
Naga Suman Kanagala, Martine Ceberio, Thompson Sarkodie-Gyan, Vladik Kreinovich, and Roberto Araiza

Published in: Homer Nazeran, Michael Goldman, and Richard Schoephoerster, Biomedical Engineering: Recent Developments, Medical and Engineering Publishers, 2008, pp. 153-156.

Many neurological diseases such as stroke, traumatic body injury, and spinal cord injury drastically decrease the patient's ability to walk without physical assistance. To re-establish normal gait, patients undergo extensive rehabilitation. At present, rehabilitation requires gait assessment by highly qualified, experienced clinicians. To make rehabilitation easier to access and to decrease its cost, it is desirable to automate gait assessment. In precise terms, gait assessment means comparing the recorded patient's gait with a standard (average) gait of healthy people of the same body measurements. One of the problems in this comparison is that patients walk slower; so, to properly compare gaits, we must first appropriately "scale" the standard gait so that it best matches the speed with which the patient walks. One possibility is to try all possible scalings, but this is computationally very intensive. In this paper, we adjust the known fast image referencing techniques to design a fast algorithm that uses the Fast Fourier Transform to find the optimal scaling.
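
A related, simpler use of the FFT can illustrate why it helps here (a Python/NumPy sketch of finding the best time shift by FFT cross-correlation; the paper adapts similar ideas to find the best scaling, and all signals below are invented):

    # Related illustration only: FFT-based cross-correlation finds, in
    # O(n log n) time, the circular shift that best aligns a recorded signal
    # with a reference; the paper adapts such ideas to find the best scaling.

    import numpy as np

    def delay_of(signal, reference):
        corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(reference)))
        return int(np.argmax(corr.real))

    t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
    reference = np.sin(t)                 # stand-in for a standard gait cycle
    signal = np.roll(reference, 40)       # same cycle, recorded with a delay
    print(delay_of(signal, reference))    # -> 40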

File in pdf


Technical Report UTEP-CS-08-05, February 2008
A Library-Based Approach to Translating OCL Constraints to JML Assertions for Runtime Checking
Carmen Avila, Guillermo Flores, Jr., and Yoonsik Cheon

OCL is a formal notation for specifying constraints on UML models that cannot otherwise be expressed by diagrammatic notations such as class diagrams. Using OCL, one can document detailed design decisions and choices along with behavior, e.g., class invariants and method pre- and postconditions. However, OCL constraints cannot be directly executed and checked at runtime by an implementation; thus constraint violations may not be detected or noticed, causing many potential development and maintenance problems. In this paper, we propose an approach to checking OCL constraints at runtime by translating them to executable JML assertions. The key components of our approach are a set of JML library classes, the use of model variables, and a separation of JML assertions from source code. The library classes implement OCL collection types and facilitate a direct mapping from OCL constraints to JML assertions by using model variables. The translated JML assertions are stored in specification files, separate from source code files, to ease change management of OCL constraints and Java source code. Our approach also facilitates a seamless transition from OCL-based designs to Java implementations.

File in pdf


Technical Report UTEP-CS-08-04, February 2008
Updated version UTEP-CS-08-04a, March 2008
Towards Fast Algorithms for Processing Type-2 Fuzzy Data: Extending Mendel's Algorithms From Interval-Valued to a More General Case
Vladik Kreinovich and Gang Xiang

Published in Proceedings of the 27th International Conference of the North American Fuzzy Information Processing Society NAFIPS'2008, New York, New York, May 19-22, 2008.

It is known that processing of data under general type-1 fuzzy uncertainty can be reduced to the simplest case -- of interval uncertainty: namely, Zadeh's extension principle is equivalent to level-by-level interval computations applied to alpha-cuts of the corresponding fuzzy numbers.

However, type-1 fuzzy numbers may not be the most adequate way of describing uncertainty, because they require that an expert describe his or her degree of confidence in a statement by an exact value. In practice, it is more reasonable to expect that the expert estimates his or her degree by using imprecise words from natural language -- which can be naturally formalized as fuzzy sets. The resulting type-2 fuzzy numbers more adequately represent the expert's opinions, but their practical use has been limited by the seeming computational complexity of processing them. In his recent research, J. Mendel has shown that, for the practically important case of interval-valued fuzzy sets, processing such sets can also be reduced to interval computations. In this paper, we show that Mendel's idea can be naturally extended to arbitrary type-2 fuzzy numbers.
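
To make the first paragraph concrete (standard background in Python, not the paper's new result): processing a type-1 fuzzy number "level-by-level" means applying interval arithmetic to each alpha-cut; here is a tiny sketch for f(x, y) = x + y on triangular fuzzy numbers.

    # Standard background: Zadeh's extension principle applied level-by-level,
    # i.e., interval arithmetic on alpha-cuts; here f(x, y) = x + y on
    # triangular fuzzy numbers given as (left, peak, right).

    def alpha_cut(tri, alpha):
        a, b, c = tri
        return a + alpha * (b - a), c - alpha * (c - b)

    def fuzzy_add(t1, t2, levels=5):
        cuts = []
        for i in range(levels + 1):
            alpha = i / levels
            (lo1, hi1), (lo2, hi2) = alpha_cut(t1, alpha), alpha_cut(t2, alpha)
            cuts.append((alpha, (lo1 + lo2, hi1 + hi2)))   # interval addition
        return cuts

    for alpha, interval in fuzzy_add((1, 2, 3), (10, 20, 30)):
        print(alpha, interval)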

Original file in pdf and in Compressed Postscript
Updated version in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-03, February 2008
Extracting Computable Bounds (and Algorithms) from Classical Existence Proofs: Girard Domains Enable Us to Go Beyond Local Compactness
Vladik Kreinovich and Karen Villaverde

In classical mathematics, the existence of a solution is often proven indirectly, non-constructively, without an efficient method for constructing the corresponding object. In many cases, we can extract an algorithm from a classical proof: e.g., when an object is (non-constructively) proven to be unique in a locally compact space (or when there are two such objects with a known lower bound on the distance between them). In many other practical situations, a (seemingly) natural formalization of the corresponding practical problem leads to a non-compact set. In this paper, we show that often, in such situations, we can extract efficient algorithms from classical proofs -- if we explicitly take into account (implicit) knowledge about the situation. Specifically, we show that if we consistently apply Heisenberg's operationalism idea and define objects in terms of directly measurable quantities, then we get a Girard-domain type representation in which a natural topology is, in effect, compact -- and thus, uniqueness implies computability.

File in pdf and in Compressed Postscript


Technical Report UTEP-CS-08-01, January 2008
Equidecomposability (scissors congruence) of polyhedra in R^3 and R^4 is algorithmically decidable: Hilbert's 3rd Problem revisited
Vladik Kreinovich

Published in Geombinatorics, 2008, Vol. 18, No. 1, pp. 26-34.

File in pdf and in Compressed Postscript