University of Texas at El Paso
Computer Science Department
Abstracts of 2016 Reports


Technical Report UTEP-CS-16-106, December 2016
Updated version UTEP-CS-16-106c, June 2017
Why Growth of Cancerous Tumors Is Gompertzian: A Symmetry-Based Explanation
Pedro Barragan Olague and Vladik Kreinovich

Published in Cybernetics and Physics, 2017, Vol. 6, No. 1, pp. 13-18.

It is known that the growth of a cancerous tumor is well described by Gompertz's equation. The existing explanations for this equation rely on the specifics of cell dynamics. However, the fact that for many different types of tumors, with different cell dynamics, we observe the same growth pattern, makes us believe that there should be a more fundamental explanation for this equation. In this paper, we show that a symmetry-based approach indeed leads to such an explanation: out of all scale-invariant growth dynamics, Gompertzian growth is the closest to the linear-approximation exponential growth model.
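
For reference, the growth law in question, in its standard textbook form (our notation, not copied from the paper): the tumor volume V(t) satisfies

dV/dt = a * V * ln(K/V),

whose solution is V(t) = K * exp(−b * exp(−a * t)), where K is the limiting tumor size and b = ln(K/V(0)).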

Original file UTEP-CS-16-106 in pdf
Updated version UTEP-CS-16-106c in pdf


Technical Report UTEP-CS-16-105, December 2016
Grading that Takes into Account the Need to Learn from Mistakes
Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

To appear in Journal of Uncertain Systems

It is well known that the best way to learn new material is to try it, to make mistakes, and to learn from these mistakes. However, the current grading scheme, in which the overall grade is a weighted average of the grades for all the assignments, exams, etc., does not encourage mistakes: any mistake decreases the grade on the corresponding assignment and thus, decreases the overall grade for the class. It is therefore desirable to modify the usual grading scheme, so that it will take into account -- and encourage -- learning from mistakes. Such a modification is proposed in this paper.

Specifically, we suggest that the overall grade be -- as now -- the weighted average of the grades corresponding to different parts of the material, but that each of these part-grades now be calculated differently: instead of the weighted average of the grades corresponding to the different assignments in which this material is tested, we suggest using the largest of the grades corresponding to all these assignments.
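
To illustrate the proposed aggregation, here is a minimal Python sketch (the data layout and all names are ours, purely illustrative):

    def overall_grade(topic_grades, topic_weights):
        # Overall grade = weighted average over topics; each topic's
        # grade is the LARGEST grade among all assignments testing it,
        # so an early mistake that is later corrected costs nothing.
        total = sum(topic_weights.values())
        return sum(w * max(topic_grades[t])
                   for t, w in topic_weights.items()) / total

    # A student who failed the first quiz on "loops" but aced the retest
    # gets full credit for that topic:
    print(overall_grade({"loops": [40, 95], "recursion": [80, 85]},
                        {"loops": 0.5, "recursion": 0.5}))  # prints 90.0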

File in pdf


Technical Report UTEP-CS-16-104, December 2016
Optimal Group Decision Making Criterion and How It Can Help to Decrease Poverty, Inequality, and Discrimination
Vladik Kreinovich and Thongchai Dumrongpokaphan

Published in: Mikael Collan and Janusz Kacprzyk (editors), Soft Computing Applications for Group Decision Making and Consensus Modeling, Springer Verlag, 2018, pp. 3-19.

The traditional approach to group decision making in economics is to maximize the GDP, i.e., the overall gain. The hope behind this approach is that the increased wealth will trickle down to everyone. Sometimes this happens, but often, in spite of an increase in overall GDP, inequality remains: some people remain poor, some groups continue to face economic discrimination, etc. This shows that maximizing the overall gain is probably not always the best criterion in group decision making. In this chapter, we find a group decision making criterion which is optimal (in some reasonable sense), and we show that using this optimal criterion can indeed help to decrease poverty, inequality, and discrimination.

File in pdf


Technical Report UTEP-CS-16-103, December 2016
Updated version UTEP-CS-16-103a, April 2017
Fuzzy Techniques Explain Empirical Power Law Governing Wars and Terrorist Attacks
Hung T. Nguyen, Kittawit Autchariyapanitkul, and Vladik Kreinovich

Published in Proceedings of the Joint 17th Congress of International Fuzzy Systems Association and 9th International Conference on Soft Computing and Intelligent Systems, Otsu, Japan, June 27-30, 2017.

The empirical distribution of the number of casualties in wars and terrorist attacks follows a power law with exponent 2.5. So far, there has not been a convincing explanation for this empirical fact. In this paper, we show that by using fuzzy techniques, we can explain this exponent. Interestingly, we can also get a similar explanation if we use probabilistic techniques. The fact that two different techniques lead to the same explanation makes us reasonably confident that this explanation is correct.
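
In the standard notation, the cited empirical law states that the probability density of the number of casualties x behaves, for large x, as p(x) = C * x^(−2.5) (assuming, as is usual in this literature, that the exponent refers to the density).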

Original file UTEP-CS-16-103 in pdf
Updated version UTEP-CS-16-103a in pdf


Technical Report UTEP-CS-16-102, December 2016
A Modification of Backpropagation Enables Neural Networks to Learn Preferences
Martine Ceberio and Vladik Kreinovich

To appear in Journal of Uncertain Systems

To help a person make proper decisions, we must first understand the person's preferences. A natural way to determine these preferences is to learn them from the person's choices. In principle, we can use traditional machine learning techniques: we start with all the pairs (x,y) of options for which we know the person's choices, and we train, e.g., a neural network to recognize these choices. However, this process does not take into account that a rational person's choices are consistent: e.g., if a person prefers a to b and b to c, this person should also prefer a to c. Since the usual learning algorithms do not take this consistency into account, the resulting choice-prediction algorithm may be inconsistent. It is therefore desirable to explicitly take consistency into account when training the network. In this paper, we show how this can be done.
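
One standard way to obtain consistency by construction -- a sketch of the general idea, not necessarily the authors' specific modification of backpropagation -- is to learn a single scalar utility u(x) and to derive all pairwise choices from it; transitivity then holds automatically:

    import numpy as np

    def train_preferences(pairs, dim, lr=0.1, epochs=500):
        # Learn a linear utility u(x) = w . x from observed choices;
        # each pair (x, y) records that x was preferred to y, and
        # P(x preferred to y) is modeled as sigmoid(u(x) - u(y)).
        w = np.zeros(dim)
        for _ in range(epochs):
            for x, y in pairs:
                p = 1.0 / (1.0 + np.exp(-(w @ x - w @ y)))
                w += lr * (1.0 - p) * (x - y)   # log-likelihood gradient
        return w

    a, b, c = np.array([3., 1.]), np.array([2., 2.]), np.array([1., 3.])
    w = train_preferences([(a, b), (b, c)], dim=2)
    assert w @ a > w @ c   # a preferred to c follows automatically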

File in pdf


Technical Report UTEP-CS-16-101, December 2016
Fuzzy Data Processing Beyond Min t-Norm
Andrzej Pownuk, Vladik Kreinovich, and Songsak Sriboonchitta

To appear in: Christian Berger-Vachon, Ana María Gil Lafuente, Janusz Kacprzyk, Yuriy Kondratenko, Jose M. Merigo Lindahl, and Carlo Morabito (eds.), Complex Systems: Solutions and Challenges in Economics, Management, and Engineering, Springer Verlag.

Usual algorithms for fuzzy data processing -- based on the usual form of Zadeh's extension principle -- implicitly assume that we use the min "and"-operation (t-norm). It is known, however, that in many practical situations, other t-norms more adequately describe human reasoning. It is therefore desirable to extend the usual algorithms to situations when we use t-norms different from min. Such an extension is provided in this paper.
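
For reference, the resulting generalization of Zadeh's extension principle: if y = f(x1, ..., xn) and the inputs have membership functions μ1(x1), ..., μn(xn), then for an "and"-operation (t-norm) t,

μ(y) = sup {t(μ1(x1), ..., μn(xn)) : all tuples (x1, ..., xn) for which f(x1, ..., xn) = y},

and the usual algorithms correspond to the case t = min.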

File in pdf


Technical Report UTEP-CS-16-100, December 2016
Specifying a Global Optimization Solver in Z
Angel F. Garcia Contreras and Yoonsik Cheon

NumConSol is an interval-based numerical constraint and optimization solver that finds a global optimum of a function. It is written in Python. In this document, we specify the NumConSol solver in Z, a formal specification language based on sets and predicates. The aim is to provide a solid foundation for restructuring and refactoring the current implementation of the NumConSol solver, as well as facilitating its future improvements. The formal specification also allows us to design more effective testing for the solver, e.g., by generating test cases from the specification.
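
For readers unfamiliar with interval-based global optimization, here is a generic sketch of the underlying branch-and-bound idea in Python -- purely illustrative, not NumConSol's actual code or API:

    def minimize(f_range, box, eps=1e-6):
        # f_range maps an interval (lo, hi) to an enclosure of the range
        # of f on it.  Bisect boxes, discarding any box whose lower bound
        # exceeds the best proven upper bound on the global minimum.
        best_ub, queue, leaves = float("inf"), [box], []
        while queue:
            lo, hi = queue.pop()
            f_lo, f_hi = f_range((lo, hi))
            if f_lo > best_ub:
                continue                     # cannot contain the minimum
            best_ub = min(best_ub, f_hi)
            if hi - lo <= eps:
                leaves.append((f_lo, (lo, hi)))
            else:
                mid = 0.5 * (lo + hi)
                queue += [(lo, mid), (mid, hi)]
        return min(leaves)[1]                # box with the smallest bound

    # Example: an exact interval enclosure for f(x) = (x - 1)^2:
    def f_range(iv):
        a, b = iv[0] - 1.0, iv[1] - 1.0
        lo = 0.0 if a <= 0.0 <= b else min(a * a, b * b)
        return (lo, max(a * a, b * b))

    print(minimize(f_range, (-2.0, 3.0)))    # a tiny interval around 1.0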

File in pdf


Technical Report UTEP-CS-16-99, December 2016
A Simple Geometric Explanation of Occam's Razor
Olga Kosheleva and Vladik Kreinovich

Published in Geombinatorics, 2017, Vol. 27, No. 1, pp. 15-19.

Occam's razor states that out of possible explanations, plans, and designs, we should select the simplest one. It turns out that in many practical situations, the simplest explanation is indeed the correct one, the simplest plan is often the most successful, etc. But why this happens is not very clear. In this paper, we provide a simple geometric explanation of Occam's razor.

File in pdf


Technical Report UTEP-CS-16-98, December 2016
For Fuzzy Logic, Occam's Principle Explains the Ubiquity of the Golden Ratio and of the 80-20 Rule
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2017, Vol. 4, No. 1, pp. 13-18.

In this paper, we show that for fuzzy logic, Occam's principle -- that we should always select the simplest possible explanation -- explains the ubiquity of the golden ratio and of the 80-20 rule.
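
As a simple illustration of how such constants can arise (our example, not necessarily the paper's derivation): a degree of belief d required to satisfy the simple equation d = 1 − d^2 must equal d = (sqrt(5) − 1)/2 ≈ 0.618, the inverse of the golden ratio.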

File in pdf


Technical Report UTEP-CS-16-97, December 2016
Why RSA? A Pedagogical Comment
Pedro Barragan Olague, Olga Kosheleva, and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2017, Vol. 4, No. 1, pp. 19-24.

One of the most widely used cryptographic algorithms is the RSA algorithm, in which a message m is encoded as the remainder c of m^e modulo n, where n and e are given numbers -- forming a public key. A similar transformation, c^d mod n, for an appropriate secret key d, enables us to reconstruct the original message. In this paper, we provide a pedagogical explanation for this algorithm.
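
A toy numeric illustration of this scheme in Python (textbook-sized primes; real RSA uses numbers hundreds of digits long, plus padding):

    p, q = 61, 53
    n = p * q                            # public modulus: 3233
    e = 17                               # public exponent
    d = pow(e, -1, (p - 1) * (q - 1))    # secret exponent (Python 3.8+)
    m = 65                               # the message
    c = pow(m, e, n)                     # encryption: c = m^e mod n
    assert pow(c, d, n) == m             # decryption: c^d mod n recovers m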

File in pdf


Technical Report UTEP-CS-16-96, December 2016
What Is the Best Way to Add Large Number of Integers: Number-by-Number As Computers Do Or Lowest-Digits-Then-Next-Digits-Etc As We Humans Do?
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2017, Vol. 42, pp. 115-118.

When we need to add several integers, computers add them one by one, while we usually add them digit by digit: first, we add all the lowest digits, then we add all the next lowest digits, etc. Which way is faster? Should we learn from computers, or should we teach computers to add several integers our way?

In this paper, we show that the computer way is faster. This adds one more example to the list of cases when computer-based arithmetic algorithms are much more efficient than the algorithms that we humans normally use.

File in pdf


Technical Report UTEP-CS-16-95, December 2016
How to Make Machine Learning Robust Against Adversarial Inputs
Gerardo Muela, Christian Servin, and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2017, Vol. 42, pp. 127-130.

It has been recently shown that it is possible to "cheat" many machine learning algorithms -- i.e., to perform minor modifications of the inputs that lead to a wrong classification. This feature can be used by adversaries to avoid spam detection, to create a wrong identification allowing access to classified information, etc. In this paper, we propose a solution to this problem: namely, instead of applying the original machine learning algorithm to the original inputs, we should first perform a random modification of these inputs. Since machine learning algorithms perform well on random data, such a random modification ensures that the algorithm will, with high probability, work correctly on the modified inputs. An additional advantage of this idea is that it also provides additional privacy protection.
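
A minimal sketch of this idea in Python (the classifier, the noise scale, and the majority vote over several perturbed copies -- which we add here for stability -- are our illustrative choices, not prescribed by the paper):

    import numpy as np

    def robust_classify(classify, x, noise_scale=0.1, votes=21):
        # Classify randomly perturbed copies of x instead of x itself,
        # so that a small adversarial tweak of x is drowned in the noise.
        labels = [classify(x + np.random.normal(0.0, noise_scale, x.shape))
                  for _ in range(votes)]
        return max(set(labels), key=labels.count)   # majority label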

File in pdf


Technical Report UTEP-CS-16-94, December 2016
Yes- and No-Gestures Explained by Symmetry
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2017, Vol. 41, pp. 127-129.

In most cultures, "yes" is indicated by a vertical head movement (nod), while "no" is indicated by a left-right movement (shake). In this paper, we show that basic symmetries can explain this cultural phenomenon.

File in pdf


Technical Report UTEP-CS-16-93, December 2016
Towards Decision Making under Interval Uncertainty
Andrzej Pownuk and Vladik Kreinovich

To appear in Journal of Uncertain Systems.

In many practical situations, we know the exact form of the objective function, and we know the optimal decision corresponding to each value of the corresponding parameters xi. What should we do if we do not know the exact values of xi, and instead, we only know each xi with uncertainty -- e.g., with interval uncertainty? In this case, one of the most widely used approaches is to select, for each i, one value from the corresponding interval -- usually, a midpoint -- and to use the exact-case optimal decision corresponding to the selected values. Does this approach lead to the optimal solution to the interval-uncertainty problem? If yes, is selecting the midpoints the best idea? In this paper, we provide answers to these questions. It turns out that the selecting-a-value-from-each-interval approach can indeed lead us to the optimal solution for the interval problem -- but not if we select midpoints.

File in pdf


Technical Report UTEP-CS-16-92, December 2016
Why Product "And"-Operation Is Often Efficient: One More Argument
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2017, Vol. 4, No. 1, pp. 25-28.

It is an empirical fact that the algebraic product is one of the most efficient "and"-operations in fuzzy logic. In this paper, we provide one of the possible explanations of this empirical phenomenon.

File in pdf


Technical Report UTEP-CS-16-91, December 2016
A Heuristic Solution of the Toll Optimal Problem With Congestion Affected Costs
Vyacheslav Kalashnikov, Jose G. Flores-Muniz, Nataliya Kalashnykova, and Vladik Kreinovich

Published in the Proceedings of the International Conference on Logistics and Supply Chains CiLOG'2016, Merida, Mexico, October 5-7, 2016.

An important problem concerning toll roads is setting appropriate costs for driving along the paid arcs of a transportation network. Our paper treats this problem as a bilevel programming model. At the upper level, decisions are made by a public regulator or private company that administers the toll roads and seeks to increase its benefits. At the lower level, several transportation companies or individual users satisfy the existing demand for transportation of goods or passengers, selecting the routes that minimize their total travel costs. In contrast to previous models, here the lower-level problem assumes quadratic costs implied by possible traffic congestion. To find a solution to the bilevel programming problem, a straightforward method based on sensitivity analysis for quadratic programs is put forward. In order to "jump" (if necessary) from a local maximum of the upper-level objective function to a vicinity of another, the "filled function" technique is applied. The proposed algorithms are original and work efficiently when applied to small- and medium-sized numerical test problems.

File in pdf


Technical Report UTEP-CS-16-90, December 2016
Towards an Algebraic Description of Set Arithmetic
Olga Kosheleva and Vladik Kreinovich

To describe the state of the world, we need to describe the values of all physical quantities. In practice, due to inevitable measurement inaccuracy, we do not know the exact values of these quantities, we only know the sets of possible values for these quantities. On the class of such uncertainty-related sets, we can naturally define arithmetic operations that transform, e.g., uncertainty in a and b into uncertainty with which we know the sum a + b.

In many applications, it has been useful to reformulate the problem in purely algebraic terms, i.e., in terms of axioms that the basic operations must satisfy: there are useful applications of groups, rings, fields, etc. From this viewpoint, it is desirable to be able to describe the class of uncertainty-related sets with the corresponding arithmetic operations in algebraic terms. In this paper, we provide such a representation.

Our representation has the same complexity as the usual algebraic description of a field (such as the field of real numbers).
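
A small sketch of such set arithmetic for finite sets of possible values (the class name and layout are ours; intervals and other set representations work analogously):

    from itertools import product

    class UncertainValue:
        """A quantity known only up to a set of possible values."""
        def __init__(self, values):
            self.values = frozenset(values)
        def __add__(self, other):
            # uncertainty in a and b -> uncertainty in a + b
            return UncertainValue(x + y for x, y in
                                  product(self.values, other.values))

    a = UncertainValue({1, 2})
    b = UncertainValue({10, 20})
    print(sorted((a + b).values))   # [11, 12, 21, 22]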

File in pdf


Technical Report UTEP-CS-16-89, December 2016
Structure of Filled Functions: Why Gaussian and Cauchy Templates Are Most Efficient
Vyacheslav Kalashnikov, Vladik Kreinovich, Jose Guadalupe Flores-Muniz, and Nataliya Kalashnykova

To appear in International Journal of Combinatorial Optimization and Informatics

One of the main problems of optimization algorithms is that they often end up in a local optimum. It is, therefore, necessary to make sure that the algorithm gets out of the local optimum and eventually reaches the global optimum. One of the promising ways of guiding the search out of a local optimum is the filled function method. It turns out that empirically, the best smoothing functions to use in this method are the Gaussian and Cauchy functions. In this paper, we provide a possible theoretical explanation of this empirical effect.

File in pdf


Technical Report UTEP-CS-16-88, December 2016
Fuzzy Pareto Solution in Multi-Criteria Group Decision Making with Intuitionistic Linguistic Preference Relation
Bui Cong Cuong, Vladik Kreinovich, Le Hoang Son, and Nilanjan Dey

To appear in International Journal of Fuzzy System Applications

In this paper, we investigate multi-criteria group decision making with an intuitionistic linguistic preference relation. The concept of a Fuzzy Collective Solution (FCS) is used to evaluate and rank the candidate solution sets for modeling under linguistic assessments. The intuitionistic linguistic preference relation and the associated aggregation procedures are then defined in a new concept of a Fuzzy Pareto Solution. Numerical examples are presented to demonstrate the computing procedures. The results affirm the efficiency of the proposed method.

File in pdf


Technical Report UTEP-CS-16-87, December 2016
When Invading, Cancer Cells Do Not Divide: A Geometric (Symmetry-Based) Explanation of an Empirical Observation
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2017, Vol. 41, pp. 122-126.

In general, malignant tumors are known to grow fast: cancer cells that form these tumors divide and spread around. Tumors also experience the process of metastasis, when cancer cells invade neighboring organs. A recent experiment has shown that, contrary to the previous assumptions, when cancer cells are invading, they stop dividing. In this paper, we provide a geometric explanation for this empirical phenomenon.

File in pdf


Technical Report UTEP-CS-16-86, December 2016
Why the Presence of Point-Wise ("Punctate") Calcifications or Linear Configurations of Calcifications Makes Breast Cancer More Probable: A Geometric Explanation
Olga Kosheleva and Vladik Kreinovich

When a specialist analyzes a mammogram for signs of possible breast cancer, he or she pays special attention to point-wise ("punctate") calcifications and to linear configurations of calcifications -- since empirically, such calcifications and configurations of calcifications are indeed most frequently associated with cancer. In this paper, we provide a geometric explanation for this empirical phenomenon.

File in pdf


Technical Report UTEP-CS-16-85, December 2016
Why Most Bright Stars Are Binary But Most Dim Stars Are Single: A Simple Qualitative Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2017, Vol. 41, pp. 118-121.

It is known that most visible stars are binary: they have a nearby companion star, and these two stars orbit around each other. Based on this fact, until recently, astronomers believed that, in general, most stars are binary. A few years ago, a surprising paper showed that while most bright stars are indeed binary, most dim stars are single. In this paper, we provide a simple qualitative explanation for this empirical fact.

File in pdf


Technical Report UTEP-CS-16-84, November 2016
Recent Changes in Enrollment in CAHSI Departments
Heather Thiry and Sarah Hug

CAHSI departments have experienced CS enrollment growth rates that range from modest to large. Five departments reported high growth rates, ranging from 10-25% per year in recent years; two departments have experienced a "modest increase"; and one university has experienced a decline in growth. This decline was in a computer engineering department, rather than computer science, and was attributed to the fact that the university had added two new majors in computing, and some students are now choosing one of those majors over the traditional computer engineering degree. In spring 2016, CAHSI evaluators convened a focus group with CAHSI PIs to collect data on the challenges and opportunities faced by CAHSI departments in a time of high enrollment growth in computer science departments nationally.

File in pdf


Technical Report UTEP-CS-16-83, November 2016
Writing JML Specifications Using Java 8 Streams
Yoonsik Cheon, Zejing Cao, and Khandoker Rahad

JML is a formal behavioral interface specification language for Java to document Java program modules such as classes and interfaces. When composing JML specifications, one frequently writes assertions involving a collection of values. In this paper, we propose to use Java 8 streams for writing more concise and cleaner assertions on a collection. The use of streams in JML can be minimal and non-invasive, staying within the conventional style of writing assertions; it can also be holistic, writing all assertions in terms of an abstract state defined by streams. We perform a small case study to illustrate our approach and show its effectiveness. We then summarize our findings and the lessons that we learned from the case study.

File in pdf


Technical Report UTEP-CS-16-82, November 2016
It Is Advantageous to Make a Syllabus As Precise As Possible: Decision-Theoretic Analysis
Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2017, Vol. 4, No. 1, pp. 1-5.

Should a syllabus be precise? Shall we indicate exactly how many points we should assign for each test and for each assignment? On the one hand, many students like such certainty. On the other hand, instructors would like to have some flexibility: if an assignment turns out to be more complex than expected, we should be able to increase the number of points for this assignment, and, vice versa, if it turns out to be simpler than expected, we should be able to decrease the number of points.

In this paper, we analyze this problem from a decision-theoretic viewpoint. Our conclusion is that while a little flexibility is OK, in general, it is beneficial to make a syllabus as precise as possible.

File in pdf


Technical Report UTEP-CS-16-81, November 2016
Why Multiplication Has Higher Priority than Addition: A Pedagogical Remark
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2017, Vol. 4, No. 1, pp. 7-11.

Traditionally, multiplication has higher priority over addition; this means that there is no need to add parentheses if we want to perform multiplication first, and we need to explicitly add parentheses if we want addition to be performed first. Why not use an alternative arrangement, in which addition has higher priority? In this paper, we explain the traditional priority arrangement by showing that in the general case, the traditional arrangement allows us to use fewer parentheses than the alternative one.

File in pdf


Technical Report UTEP-CS-16-80, November 2016
Revised version UTEP-CS-16-80a, November 2016
Final version UTEP-CS-16-80b, March 2017
Use of Machine Learning to Analyze and -- Hopefully -- Predict Volcano Activity
Justin Parra, Olac Fuentes, Elizabeth Anthony, and Vladik Kreinovich

To appear in Acta Polytechnica Hungarica

Volcanic eruptions cause significant loss of lives and property around the world each year. Their importance is highlighted by the sheer number of volcanoes for which eruptive activity is probable. These volcanoes are classified as being in a state of unrest. The Global Volcanism Program maintained by the Smithsonian Institution estimates that approximately 600 volcanoes, many proximal to major urban areas, are currently in this state of unrest. A spectrum of phenomena serves as precursors to eruption, including ground deformation, emission of gases, and seismic activity. The precursors are caused by magma upwelling from the Moho to the shallow (2-5 km) subsurface and magma movement in the volcano conduit immediately preceding eruption.

Precursors have in common the fundamental petrologic processes of melt generation in the lithosphere and subsequent magma differentiation. Our ultimate objective is to apply state-of-the-art machine learning techniques to volcano eruption forecasting. In this paper, we applied machine learning techniques to precursor data -- such as the data for the 1999 eruption of Redoubt volcano, Alaska, for which a comprehensive record of precursor activity exists as USGS public-domain files -- and to global databases, such as the Smithsonian Institution's Global Volcanism Program and Aerocom (which is part of the HEMCO database). As a result, we get geophysically meaningful results.

Original file UTEP-CS-16-80 in pdf
Updated version UTEP-CS-16-80a in pdf
Final version UTEP-CS-16-80b in pdf


Technical Report UTEP-CS-16-79, November 2016
Updated version UTEP-CS-16-79a, November 2016
Final version UTEP-CS-16-79b, March 2017
Gaussian and Cauchy Functions in the Filled Function Method -- Why and What Next: On the Example of Optimizing Road Tolls
Jose Guadalupe Flores Muniz, Vyacheslav V. Kalashnikov, Vladik Kreinovich, and Nataliya Kalashnykova

To appear in Acta Polytechnica Hungarica

In many practical problems, we need to find the values of the parameters that optimize the desired objective function. For example, for the toll roads, it is important to set the toll values that lead to the fastest return on investment.

There exist many optimization algorithms; the problem is that these algorithms often end up in a local optimum. One of the promising methods to avoid local optima is the filled function method, in which we, in effect, first optimize a smoothed version of the objective function, and then use the resulting optimum to look for the optimum of the original function. It turns out that empirically, the best smoothing functions to use in this method are the Gaussian and the Cauchy functions. In this paper, we show that from the viewpoint of computational complexity, these two smoothing functions are indeed the simplest.

The Gaussian and Cauchy functions are not a panacea: in some cases, they still leave us with a local optimum. In this paper, we use the computational complexity analysis to describe the next-simplest smoothing functions which are worth trying in such situations.

Original file UTEP-CS-16-79 in pdf
Updated version UTEP-CS-16-79a in pdf
Final version UTEP-CS-16-79b in pdf


Technical Report UTEP-CS-16-78, November 2016
Updated version UTEP-CS-16-78a, April 2017
Scaling-Invariant Description of Dependence Between Fuzzy Variables: Towards a Fuzzy Version of Copulas
Gerardo Muela, Vladik Kreinovich, and Christian Servin

Published in Proceedings of the Joint 17th Congress of International Fuzzy Systems Association and 9th International Conference on Soft Computing and Intelligent Systems, Otsu, Japan, June 27-30, 2017.

To get a general description of the dependence between n fuzzy variables x1, ..., xn, we can use the membership function μ(x1, ..., xn) that describes, for each possible tuple of values (x1, ..., xn), to what extent this tuple is possible.

There are, however, many ways to elicit these degrees. Different elicitations lead, in general, to different numerical values of these degrees -- although, ideally, tuples which have a higher degree of possibility in one scale should have a higher degree in other scales as well. It is therefore desirable to come up with a description of the dependence between fuzzy variables that does not depend on the corresponding procedure and, thus, has the same form in different scales. In this paper, by using an analogy with the notion of copulas in statistics, we come up with such a scaling-invariant description.

Our main idea is to use marginal membership functions

μi(xi) = max over x1, ..., x(i−1), x(i+1), ..., xn of μ(x1, ..., x(i−1), xi, x(i+1), ..., xn),

and then describe the relationship between the fuzzy variables x1, ..., xn by a function ri(x1, ..., xn) for which, for all the tuples (x1, ..., xn), we have

μ(x1, ..., xn) = μi(ri(x1, ..., xn)).
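
On a finite grid, these marginals are straightforward to compute; a tiny illustration (ours) for n = 2:

    import numpy as np

    mu = np.array([[0.1, 0.7],    # membership degrees mu(x1, x2)
                   [0.4, 1.0]])   # on a 2 x 2 grid
    mu_1 = mu.max(axis=1)         # marginal for x1: max over x2
    mu_2 = mu.max(axis=0)         # marginal for x2: max over x1
    print(mu_1, mu_2)             # [0.7 1. ] [0.4 1. ]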

Original file UTEP-CS-16-78 in pdf
Updated version UTEP-CS-16-78a in pdf


Technical Report UTEP-CS-16-77, November 2016
A Simplified Derivation of Confidence Regions Based on Inferential Models
Vladik Kreinovich

Published in Journal of Uncertain Systems, 2017, Vol. 11, No. 2, pp. 125-128.

Recently, a new inferential models approach has been proposed for statistics. Specifically, this approach provides a new random-set-based way to come up with confidence regions. In this paper, we show that the confidence regions obtained by using the main version of this new methodology can also be naturally obtained directly, without invoking random sets.

File in pdf


Technical Report UTEP-CS-16-76, October 2016
Are Java Programming Best Practices Also Best Practices for Android?
Yoonsik Cheon

Android apps are written in Java. Android beginners assume that Java programming best practices are equally applicable to Android programming. In this paper, we perform a small case study to show that the assumption can be wrong. We port a well-written Java application to Android. A certain key assumption of object-oriented programming doesn't hold on the Android platform. Thus, some of the best practices in writing Java programs are not best practices for Android. In fact, they are anti-patterns that Android programmers should avoid. We show concrete examples of these anti-patterns or watch-outs along with their fixes.

File in pdf


Technical Report UTEP-CS-16-75, October 2016
Von Neumann-Morgenstern Solutions, Quantum Physics, and Stored Programs vs. Data: Unity of Von Neumann's Legacy
Olga Kosheleva, Martha Osegueda Escobar, and Vladik Kreinovich

Published in Proceedings of the 4th International Conference on Mathematical and Computer Modeling, Omsk, Russia, November 11, 2016, pp. 8-13.

In this paper, we show that several seemingly unrelated topics of John von Neumann's research are actually very closely related.

File in pdf


Technical Report UTEP-CS-16-74, October 2016
Cosmological Inflation: A Simple Qualitative Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Proceedings of the 4th International Conference on Mathematical and Computer Modeling, Omsk, Russia, November 11, 2016, pp. 19-23.

In this paper, we provide a simple qualitative explanation of cosmological inflation -- the phenomenon that, at the beginning of the Universe, its size was increasing exponentially.

File in pdf


Technical Report UTEP-CS-16-73, October 2016
Why Utility Non-Linearly Depends on Money: A Commonsense Explanation
Olga Kosheleva, Mahdokht Afravi, and Vladik Kreinovich

Published in Proceedings of the 4th International Conference on Mathematical and Computer Modeling, Omsk, Russia, November 11, 2016, pp. 13-18.

Human decision making is based on the notion of utility. Empirical studies have shown that utility non-linearly depends on the money amount. In this paper, we provide a commonsense explanation of this empirical fact: namely, that without such non-linearity, we would not have a correct description of such a commonsense behavior as saving money for retirement.

File in pdf


Technical Report UTEP-CS-16-72, October 2016
How to Assign Numerical Values to Partially Ordered Levels of Confidence: Robustness Approach
Kimberly Kato

Published in Proceedings of the 4th International Conference on Mathematical and Computer Modeling, Omsk, Russia, November 11, 2016, pp. 85-89.

In many practical situations, experts' levels of confidence are described by words from natural language, and these words are only partially ordered. Since computers are much more efficient at processing numbers than words, it is desirable to assign numerical values to these degrees. Of course, there are many possible assignments that preserve the order between words. It is reasonable to select an assignment which is the most robust, i.e., for which the largest possible deviation from the numerical values still preserves the order. In this paper, we describe such assignments for situations when we have 2, 3, and 4 different words.

File in pdf


Technical Report UTEP-CS-16-71, October 2016
Why Half-Frequency in Intelligent Compaction
Pedro Barragan Olague and Vladik Kreinovich

Published in Proceedings of the 4th International Conference on Mathematical and Computer Modeling, Omsk, Russia, November 11, 2016, pp. 23-26.

To gauge how well vibrating rollers have compacted the road segment, it is reasonable to process the acceleration measured by the attached sensors. Theoretically, we expect the resulting signal to be periodic with the same frequency f with which the roller vibrates -- and thus, after a Fourier transform, we expect to observe only frequencies which are multiples of the vibration frequency f.

Surprisingly, often, we also observe a peak at half-frequency f/2.

In this paper, we explain this empirical phenomenon: we show that it is a particular case of a spontaneous symmetry violation, and that the general physical theory of such symmetry violations explains why it is exactly the half-frequency signals that are often observed.

File in pdf


Technical Report UTEP-CS-16-70, October 2016
Intuitionistic Fuzzy Logic Is Not Always Equivalent to Interval-Valued One
Christian Servin and Vladik Kreinovich

Published in Notes on Intuitionistic Fuzzy Sets, 2016, Vol. 22, No. 5, pp. 1-11.

It has been shown that from the purely mathematical viewpoint, the (traditional) intuitionistic fuzzy logic is equivalent to interval-valued fuzzy logic. In this paper, we show that if we go beyond the traditional "and"- and "or"-operations, then intuitionistic fuzzy logic becomes more general than the interval-valued one.
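
For reference, the equivalence mentioned here is the standard correspondence: an intuitionistic pair (μ, ν) of membership and non-membership degrees, with μ + ν ≤ 1, corresponds to the interval [μ, 1 − ν] of possible degrees of membership.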

File in pdf


Technical Report UTEP-CS-16-69, October 2016
Preliminary Investigation of Mobile System Features Potentially Relevant to HPC
David Pruitt and Eric Freudenthal

Energy consumption's increasing importance in scientific computing has driven an interest in developing energy-efficient high-performance systems. The energy constraints of mobile computing have motivated the design and evolution of low-power computing systems capable of supporting a variety of compute-intensive user interfaces and applications. Others have observed that mobile devices have also evolved to provide high performance. Their work has primarily examined the performance and efficiency of compute-intensive scientific programs executed either on mobile systems or on hybrids of mobile CPUs grafted into non-mobile (sometimes HPC) systems.

This report describes an investigation of the performance and energy consumption of a single scientific code on five high-performance and mobile systems, with the objective of identifying the performance and energy-efficiency implications of a variety of architectural features. The results of this pilot study suggest that the ISA (instruction set architecture) is less significant than other specific aspects of system architecture in achieving high performance at high efficiency. The strategy employed in this study may be extended to other scientific applications with a variety of memory access, computation, and communication properties.

File in pdf


Technical Report UTEP-CS-16-68, September 2016
Why Pairwise Testing Works So Well: A Possible Theoretical Explanation of an Empirical Phenomenon
Francisco Zapata and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2017, Vol. 41, pp. 130-134.

Some software defects can be detected only if we consider all possible combinations of three, four, or more inputs. However, empirical data shows that the overwhelming majority of software defects are detected during pairwise testing, when we only test the software on combinations of pairs of different inputs. In this paper, we provide a possible theoretical explanation for the corresponding empirical data.

File in pdf


Technical Report UTEP-CS-16-67, September 2016
Updated Version UTEP-CS-16-67a, September 2016
Metric Spaces Under Interval Uncertainty: Towards an Adequate Definition
Mahdokht Afravi, Vladik Kreinovich, and Thongchai Dumrongpokaphan

Published in Proceedings of the 15th Mexican International Conference on Artificial Intelligence MICAI'2016, Cancun, Mexico, October 25-29, 2016, Springer Lecture Notes in Artificial Intelligence.

In many practical situations, we only know the bounds on the distances. A natural question is: knowing these bounds, can we check whether there exists a metric whose distances always lie within these bounds -- or no such metric is possible and thus, the bounds are inconsistent? In this paper, we provide an answer to this question. We also describe possible applications of this result to a description of opposite notions in commonsense reasoning.

Original file UTEP-CS-16-67 in pdf
Updated version UTEP-CS-16-67a in pdf


Technical Report UTEP-CS-16-66, September 2016
Preliminaries to a Study of Stance in News Broadcasts
Nigel G. Ward

Aspects of stance have significant potential for information retrieval and filtering. This technical report is about stance in radio news broadcasts, and is intended primarily to motivate and document details of the data and annotations used in the work reported in Inferring Stance from Prosody (Ward et al., 2016). It describes the process of identifying 14 important aspects of stance, describes two corpora for investigating stance, describes the annotation of those corpora, presents some preliminary observations, and lists a set of useful prosodic features.

File in pdf


Technical Report UTEP-CS-16-65, September 2016
For Multi-Interval-Valued Fuzzy Sets, Centroid Defuzzification Is Equivalent to Defuzzifying Its Interval Hull: A Theorem
Vladik Kreinovich and Songsak Sriboonchitta

Published in Proceedings of the 15th Mexican International Conference on Artificial Intelligence MICAI'2016, Cancun, Mexico, October 25-29, 2016, Springer Lecture Notes in Artificial Intelligence.

In the traditional fuzzy logic, the expert's degree of certainty in a statement is described either by a number from the interval [0,1] or by a subinterval of such an interval. To adequately describe the opinion of several experts, researchers proposed to use a union of the corresponding sets -- which is, in general, more complex than an interval. In this paper, we prove that for such set-valued fuzzy sets, centroid defuzzification is equivalent to defuzzifying its interval hull.

As a consequence of this result, we prove that the centroid defuzzification of a general type-2 fuzzy set can be reduced to the easier-to-compute case when for each x, the corresponding fuzzy degree of membership is convex.
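
For reference: for a type-1 membership function μ(x), centroid defuzzification returns the value c = (∫ x * μ(x) dx) / (∫ μ(x) dx); the defuzzification discussed here generalizes this formula to the case when, for each x, the degree of membership is itself a set of values.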

File in pdf


Technical Report UTEP-CS-16-64, September 2016
Computability of the Avoidance Set and of the Set-Valued Identification Problem
Anthony Welte, Luc Jaulin, Martine Ceberio, and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2017, Vol. 11, No. 2, pp. 129-136.

In some practical situations, we need to find the avoidance set, i.e., the set of all initial states for which the system never goes into the forbidden region. Algorithms are known for computing the avoidance set in several practically important cases. In this paper, we consider a general case, and we show that, in some reasonable sense, the corresponding general problem is always algorithmically solvable. A similar algorithm is possible for another general system-related problem: the problem of describing the set of all possible states which are consistent with the available measurement results.

File in pdf


Technical Report UTEP-CS-16-63, September 2016
HifoCap: An Android App for Wearable Health Devices
Yoonsik Cheon and Rodrigo Romero

Android is becoming a platform for mobile health-care devices and apps. However, there are many challenges in developing soft real-time, health-care apps for non-dedicated mobile devices such as smartphones and tablets. In this paper we share our experiences in developing the HifoCap app, a mobile app for receiving electroencephalogram (EEG) wave samples from a wearable device, visualizing the received EEG samples, and transmitting them to a cloud storage server. The app is network- and data-intensive. We describe the challenges we faced while developing the HifoCap app -- e.g., ensuring the soft real-time requirement in the presence of uncertainty on the Android platform -- along with our solutions to them. We measure both the time and space efficiency of our app and evaluate the effectiveness of our solutions quantitatively. We believe our solutions to be applicable to other soft real-time apps targeted for non-dedicated Android devices.

File in pdf


Technical Report UTEP-CS-16-62, September 2016
Updated version UTEP-CS-16-62a, June 2017
Decision Making Under Interval Uncertainty as a Natural Example of a Quandle
Mahdokht Afravi and Vladik Kreinovich

Published in Reliable Computing, 2017, Vol. 25, pp. 8-14.

In many real-life situations, we need to select an alternative from a set of possible alternatives. In many such situations, we have a well-defined objective function u(a) that describes our preferences. If we know the exact value of u(a) for each alternative a, then we select the alternative with the largest value of u(a). In practice, however, we usually know the consequences of each decision a only with some uncertainty. As a result, for each alternative a, instead of the exact utility value u(a), we only know the interval of possible values. In this paper, we show that the resulting problem of decision making under interval uncertainty is a natural example of a quandle, i.e., of a general class of operations introduced in knot theory.
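
For readers unfamiliar with the term: a quandle is a set with a binary operation * satisfying the standard axioms from knot theory: (1) a * a = a for every a (idempotence); (2) for every b, the mapping a → a * b is a one-to-one correspondence; and (3) (a * b) * c = (a * c) * (b * c) (self-distributivity).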

Original file UTEP-CS-16-62 in pdf
Updated version UTEP-CS-16-62a in pdf


Technical Report UTEP-CS-16-61, August 2016
Z-Numbers and Type-2 Fuzzy Sets: A Representation Result
Rafik Aliev and Vladik Kreinovich

To appear in Intelligent Automation and Soft Computing, 2017, Vol. 23, No. 4.

Traditional [0,1]-based fuzzy sets were originally invented to describe expert knowledge expressed in terms of imprecise ("fuzzy") words from natural language. To make this description more adequate, several generalizations of the traditional [0,1]-based fuzzy sets have been proposed, among them type-2 fuzzy sets and Z-numbers. The main objective of this paper is to study the relation between these two generalizations. As a result of this study, we show that if we apply data processing to Z-numbers, then we get type-2 fuzzy sets of a special type -- that we call monotonic. We also prove that every monotonic type-2 fuzzy set can be represented as a result of applying an appropriate data processing algorithm to some Z-numbers.

File in pdf


Technical Report UTEP-CS-16-60, August 2016
A classification of representable t-norm operators for picture fuzzy sets
Bui Cong Cuong, Vladik Kreinovich, and Roan Thi Ngan

Published in Proceedings of the Eighth International Conference on Knowledge and Systems Engineering KSE'2016, Hanoi, Vietnam, October 6-8, 2016

T-norms and t-conorms are basic operators of fuzzy logics. The classification of these operators is a significant problem. Some results on the classification of fuzzy logic operators for fuzzy sets are known. In 2013, we defined picture fuzzy sets, and in 2015, some representable t-norm operators and t-conorm operators were defined. In this paper, we investigate the classification of representable picture t-norm and picture t-conorm operators for picture fuzzy sets.

File in pdf


Technical Report UTEP-CS-16-59, August 2016
Updated version UTEP-CS-16-59a, October 2016
Concepts of solutions of uncertain equations with intervals, probabilities and fuzzy sets for applied tasks
Boris Kovalerchuk and Vladik Kreinovich

Published in Granular Computing, 2017, Vol. 2, No. 3, pp. 121-130.

The focus of this paper is to clarify the concepts of solutions of linear equations in interval, probabilistic, and fuzzy-set settings for real-world tasks. There is a fundamental difference between formal definitions of the solutions and physically meaningful concepts of a solution in applied tasks, when equations have uncertain components. For instance, a formal definition of the solution in terms of Moore interval analysis can be completely irrelevant for solving a real-world task. We show that formal definitions must follow a meaningful concept of the solution in the real world. The paper proposes several formalized definitions of the concept of a solution for linear equations with uncertain components in interval, probability, and fuzzy-set terms that can be interpreted in real-world tasks. The proposed concepts of solutions are generalized to difference and differential equations under uncertainty.

Original file UTEP-CS-16-59 in pdf
Updated file UTEP-CS-16-59a in pdf


Technical Report UTEP-CS-16-58, August 2016
Why Hausdorff Distance Is Natural in Interval Computations
Olga Kosheleva and Vladik Kreinovich

Several different metrics have been proposed to describe the distance between intervals and, more generally, between compact sets. In this paper, we show that from the viewpoint of interval computations, the most adequate distance is the Hausdorff distance dH(A,A') -- the smallest value ε > 0 for which every element a from the set A is ε-close to some element a' from the set A', and every element a' from the set A' is ε-close to some element a of the set A.
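
In formulas (standard definitions, stated here for convenience): dH(A, A') = max(sup over a in A of inf over a' in A' of d(a, a'), sup over a' in A' of inf over a in A of d(a, a')); in particular, for intervals, dH([a, b], [c, d]) = max(|a − c|, |b − d|).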

File in pdf


Technical Report UTEP-CS-16-57, August 2016
Revised version UTEP-CS-16-57a, April 2017
The Range of a Continuous Functional Under Set-Valued Uncertainty Is Always an Interval
Vladik Kreinovich and Olga Kosheleva

Published in Reliable Computing, 2017, Vol. 24, pp. 27-30.

One of the main problems of interval computations is computing the range of a given function on a given multi-D interval (box). It is known that the range of a continuous function on a box is always an interval. However, if, instead of a box, we consider the range over a subset of this box, the range is, in general, no longer an interval. In some practical situations, we are interested in computing the range of a functional over a function defined with interval (or, more general, set-valued) uncertainty. At first glance, it may seem that under a non-interval set-valued uncertainty, the range of the functional may be different from an interval. However, somewhat surprisingly, we show that for continuous functionals, this range is always an interval.

Original file UTEP-CS-16-57 in pdf
Updated version UTEP-CS-16-57a in pdf


Technical Report UTEP-CS-16-56, August 2016
Updated version UTEP-CS-16-56a, October 2016
Avoiding Fake Boundaries in Set Interval Computing
Anthony Welte, Luc Jaulin, Martine Ceberio, and Vladik Kreinovich

Published in Journal of Uncertain Systems, 2017, Vol. 11, No. 1, pp. 137-148.

Set-interval techniques are an efficient way of dealing with uncertainty in spatial localization problems. In these techniques, the desired set (e.g., the set of possible locations) is represented by an expression that uses intersections, unions, and complements of input sets -- which are usually only known with interval uncertainty. To find the desired set, we can, in principle, perform the corresponding set-interval computations one by one. However, the estimates obtained by such straightforward computations often contain extra elements -- e.g., fake boundaries. In this paper, we show that we can eliminate these fake boundaries (and other extra elements) if we first transform the original set expression into an appropriate DNF/CNF form.

Original file UTEP-CS-16-56 in pdf
Updated version UTEP-CS-16-56a in pdf


Technical Report UTEP-CS-16-55, August 2016
Updated version UTEP-CS-16-55a, October 2016
Robust Data Processing in the Presence of Uncertainty and Outliers: Case of Localization Problems
Anthony Welte, Luc Jaulin, Martine Ceberio, and Vladik Kreinovich

Published in Proceedings of the IEEE Series of Symposia in Computational Intelligence SSCI'2016, Athens, Greece, December 6-9, 2016.

To properly process data, we need to take into account both the measurement errors and the fact that some of the observations may be outliers. This is especially important in radar-based localization problems, where some signals may reflect not from the analyzed object, but from some nearby object. There are known methods for dealing with both measurement errors and outliers in situations in which we have full information about the corresponding probability distributions. There are also known statistics-based methods for dealing with measurement errors in situations when we only have partial information about the corresponding probabilities. In this paper, we show how these methods can be extended to situations in which we also have partial information about the outliers (and even to situations when we have no information about the outliers). In some situations in which efficient semi-heuristic methods are known, our methodology leads to a justification of these efficient heuristics -- which makes us confident that our new methods will be efficient in other situations as well.

Original file UTEP-CS-16-55 in pdf
Updated version UTEP-CS-16-55a in pdf


Technical Report UTEP-CS-16-54, August 2016
One Needs to Be Careful When Dismissing Outliers: A Realistic Example
Carlos Fajardo, Olga Kosheleva, and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2016, Vol. 3, No. 1, pp. 205-214.

The traditional approach to eliminating outliers is as follows: we compute the sample mean μ and the sample standard deviation σ, and then, for an appropriate value k0 = 2, 3, 6, etc., we eliminate all data points outside the interval [μ − k0 * σ, μ + k0 * σ] as outliers. Then, we repeat this procedure with the remaining data, eliminate new outliers, etc., until on some iteration, no new outliers are eliminated. In many applications, this procedure works well. However, in this paper, we provide a realistic example in which this procedure, instead of eliminating all outliers and leaving adequate data points intact, eliminates all the data points. This example shows that one needs to be careful when applying the standard outlier-eliminating procedure.
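
For concreteness, here is the procedure in question as a short Python sketch (ours):

    import statistics

    def eliminate_outliers(data, k0=2.0):
        # Repeatedly drop points outside [mu - k0*sigma, mu + k0*sigma]
        # until nothing is dropped; on the example constructed in the
        # paper, this loop eventually eliminates ALL data points.
        while len(data) > 1:
            mu = statistics.mean(data)
            sigma = statistics.stdev(data)
            kept = [x for x in data if abs(x - mu) <= k0 * sigma]
            if len(kept) == len(data):
                break
            data = kept
        return data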

File in pdf


Technical Report UTEP-CS-16-53, August 2016
How to Determine the Stiffness of the Pavement's Upper Layer (Base) Based on the Overall Stiffness and the Stiffness of the Lower Layer (Subgrade)
Christian Servin and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2016, Vol. 3, No. 1, pp. 193-203.

In road construction, it is important to estimate the difficult-to-measure stiffness of the pavement's upper layer based on the easier-to-measure overall stiffness and the stiffness of the lower layer. In situations when the overall stiffness is not yet sufficient, it is also important to estimate how much more we need to add to the upper layer to reach the desired overall stiffness. In this paper, for the cases when a linear approximation is sufficient, we provide analytical formulas for the desired estimations.
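
As a simple illustration of why such an estimate is possible in the linearized setting (a generic series-stiffness toy model, not the specific pavement model used in the paper): if the two layers responded like springs in series, the overall stiffness k would satisfy 1/k = 1/k_base + 1/k_subgrade, so the base stiffness could be recovered as k_base = (k * k_subgrade) / (k_subgrade − k).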

File in pdf


Technical Report UTEP-CS-16-52, July 2016
Why Ragin's Fuzzy Techniques Lead to Successful Social Science Applications: An Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2016, Vol. 3, No. 1, pp. 185-192.

To find the relation between two concepts, social scientists traditionally look for correlations between the numerical quantities describing these concepts. Sometimes this helps, but sometimes, while we are clear that there is a relation, statistical analysis does not show any correlation. Charles Ragin has shown that often, in such situations, we can find a statistically significant correlation between the degrees to which experts estimate the corresponding concepts to be applicable to given situations. In this paper, we provide a simple explanation for this empirical success.

File in pdf


Technical Report UTEP-CS-16-51, July 2016
Which Interval Is the Closest to a Given Set?
Vladik Kreinovich and Olga Kosheleva

In some practical situations, we know a set of possible values of a physical quantity -- a set which is not an interval. Since computing with sets is often complicated, it is desirable to approximate this set by an easier-to-process set: namely, with an interval. In this paper, we describe intervals which are the closest approximations to a given set.

File in pdf


Technical Report UTEP-CS-16-50, July 2016
How to Make Plausibility-Based Forecasting More Accurate
Kongliang Zhu, Nantiworn Thianpaen, and Vladik Kreinovich

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and Van Nam Huynh (eds.), Robustness in Econometrics, Springer Verlag, Cham, Switzerland, 2017, pp. 99-110.

In recent papers, a new plausibility-based forecasting method was proposed. While this method has been empirically successful, one of its steps -- selecting a uniform probability distribution for the plausibility level -- is heuristic. It is therefore desirable to check whether this selection is optimal or whether a modified selection would lead to a more accurate forecast. In this paper, we show that the uniform distribution does not always lead to (asymptotically) optimal estimates, and we show how to modify the uniform-distribution step so that the resulting estimates become asymptotically optimal.

File in pdf


Technical Report UTEP-CS-16-49, July 2016
When We Know the Number of Local Maxima, Then We Can Compute All of Them
Olga Kosheleva, Martine Ceberio, and Vladik Kreinovich

Published in Proceedings of the Ninth International Workshop on Constraints Programming and Decision Making CoProd'2016, Uppsala, Sweden, September 25, 2016.

In many practical situations, we need to compute local maxima. In general, it is not algorithmically possible, given a computable function, to compute the locations of all its local maxima. We show, however, that if we know the number of local maxima, then such an algorithm is already possible. Interestingly, for global maxima, the situation is different: even if we only know the number of locations where the global maximum is attained, then, in general, it is not algorithmically possible to find all these locations. A similar impossibility result holds for local maxima if instead of knowing their exact number, we only know two possible numbers.

File in pdf


Technical Report UTEP-CS-16-48, July 2016
Interpolation Sometimes Enhances and Sometimes Impedes Spatial Correlation: Simple Pedagogical Examples
Olga Kosheleva and Vladik Kreinovich

Published in Journal of Innovative Technology and Education, 2016, Vol. 3, No. 1, pp. 79-84.

A natural way to check whether there is a dependence between two quantities is to estimate their correlation. For spatial quantities, such an estimation is complicated by the fact that, in general, we measure the values of the two quantities of interest in somewhat different locations. In this case, one possibility is to correlate each value of the first quantity with the value of the second quantity measured at a nearby point. An alternative idea is to first apply an appropriate interpolation to each of the quantities, and then look for the correlation between the resulting spatial maps. Empirical results show that sometimes one of these techniques leads to a larger correlation, and sometimes the other one. In this paper, we provide simple pedagogical examples explaining why sometimes interpolation enhances spatial correlation and sometimes interpolation impedes correlation.

File in pdf


Technical Report UTEP-CS-16-47, July 2016
Geometric Symmetries Partially Explain Why Some Paleolithic Signs Are More Frequent
Olga Kosheleva and Vladik Kreinovich

Published in Geombinatorics, 2017, Vol. 26, No. 4, pp. 141-148.

A recent analysis of Paleolithic signs has described which signs are more frequent and which are less frequent. In this paper, we show that this relative frequency can be (at least partially) explained by the symmetries of the signs: in general, the more symmetries, the more frequent the sign.

File in pdf


Technical Report UTEP-CS-16-46, July 2016
How Neural Networks (NN) Can (Hopefully) Learn Faster by Taking Into Account Known Constraints
Chitta Baral, Martine Ceberio, and Vladik Kreinovich

Published in Proceedings of the Ninth International Workshop on Constraints Programming and Decision Making CoProd'2016, Uppsala, Sweden, September 25, 2016.

Neural networks are a very successful machine learning technique. At present, deep (multi-layer) neural networks are the most successful among the known machine learning techniques. However, they still have some limitations. One of their main limitations is that their learning process is still too slow. The major reason why learning in neural networks is slow is that neural networks are currently unable to take prior knowledge into account. As a result, they simply ignore this knowledge and simulate learning "from scratch". In this paper, we show how neural networks can take prior knowledge into account and thus, hopefully, learn faster.
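
A minimal sketch of the general idea -- adding a penalty for violating known constraints to the training loss; shown for a one-parameter least-squares model for brevity, not the paper's architecture:

    def train(xs, ys, lr=0.01, epochs=1000, penalty=10.0):
        # Fit y ~ w * x by gradient descent, with the prior knowledge
        # (constraint) w >= 0 enforced through a penalty term.
        w = 0.0
        for _ in range(epochs):
            grad = sum(2.0 * (w * x - y) * x for x, y in zip(xs, ys))
            if w < 0.0:
                grad -= penalty    # gradient of the term penalty * max(0, -w)
            w -= lr * grad
        return w

    print(train([1, 2, 3], [2.1, 3.9, 6.2]))   # close to 2.0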

File in pdf


Technical Report UTEP-CS-16-45, July 2016
How to Explain Ubiquity of Constant Elasticity of Substitution (CES) Production and Utility Functions Without Explicitly Postulating CES
Olga Kosheleva, Vladik Kreinovich, and Thongchai Dumrongpokaphan

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and Van Nam Huynh (eds.), Robustness in Econometrics, Springer Verlag, Cham, Switzerland, 2017, pp. 85-98.

In many situations, the dependence of the production or utility on the corresponding factors is described by CES (Constant Elasticity of Substitution) functions. These functions are usually explained by postulating two requirements: an economically reasonable postulate of homogeneity (that the formulas should not change if we change a measuring unit) and a less convincing CES requirement. In this paper, we show that the CES requirement can be replaced by a more convincing requirement -- that the combined effect of all the factors should not depend on the order in which we combine these factors.
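
For reference, here is a minimal Python sketch of the standard CES form and of the homogeneity postulate mentioned above; the weights, inputs, and the value of r are illustrative, not from the paper:

import numpy as np

def ces(inputs, weights, r):
    """Constant Elasticity of Substitution function:
    f(x) = (sum_i w_i * x_i^r)^(1/r)."""
    x = np.asarray(inputs, dtype=float)
    w = np.asarray(weights, dtype=float)
    return np.sum(w * x ** r) ** (1.0 / r)

# Homogeneity: scaling all inputs by a constant c scales the output by c
print(ces([2.0, 8.0], [0.4, 0.6], r=0.5))
print(ces([4.0, 16.0], [0.4, 0.6], r=0.5) / 2.0)  # same value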

File in pdf


Technical Report UTEP-CS-16-44, July 2016
Why 3-D Space? Why 10-D Space? A Possible Simple Geometric Explanation
Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2016, Vol. 40, pp. 55-58.

In physics, the number of observed spatial dimensions (three) is usually taken as an empirical fact, without a deep theoretical explanation. In this paper, we provide a possible simple geometric explanation for the 3-D character of the proper space. We also provide a simple geometric explanation for the number of additional spatial dimensions that some physical theories use. Specifically, it is known that for some physical quantities, the 3-D space model with point-wise particles leads to meaningless infinities. To avoid these infinities, physicists have proposed that particles are more adequately described not as 0-D points, but rather as 1-D strings or, more generally, as multi-D "M-branes". In the corresponding M-theory, proper space is 10-dimensional. We provide a possible geometric explanation for the 10-D character of the corresponding space.

File in pdf


Technical Report UTEP-CS-16-43, July 2016
Econometric Models of Probabilistic Choice: Beyond McFadden's Formulas
Olga Kosheleva, Vladik Kreinovich, and Songsak Sriboonchitta

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and Van Nam Huynh (eds.), Robustness in Econometrics, Springer Verlag, Cham, Switzerland, 2017, pp. 79-88.

Traditional decision theory assumes that for every two alternatives, people always make the same (deterministic) choice. In practice, people's choices are often probabilistic, especially for similar alternatives: the same decision maker can sometimes select one of them and sometimes the other one. In many practical situations, an adequate description of this probabilistic choice can be provided by the logit model proposed by 2001 Nobelist D. McFadden. In this model, the probability of selecting an alternative a is proportional to exp(β * u(a)), where u(a) is the alternative's utility. Recently, however, empirical evidence has appeared showing that in some situations, we need to go beyond McFadden's formulas. In this paper, we use natural symmetries to come up with an appropriate generalization of McFadden's formulas.
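
As an illustration, here is a minimal Python sketch of McFadden's logit choice probabilities; the utilities and the value of beta are made-up inputs:

import numpy as np

def logit_choice_probabilities(utilities, beta=1.0):
    """McFadden's logit model: P(a) is proportional to exp(beta * u(a)).
    Subtracting the maximum utility first keeps exp() numerically stable."""
    u = beta * np.asarray(utilities, dtype=float)
    e = np.exp(u - u.max())
    return e / e.sum()

# Example: three alternatives with utilities 1.0, 2.0, and 0.5
print(logit_choice_probabilities([1.0, 2.0, 0.5]))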

File in pdf


Technical Report UTEP-CS-16-42, July 2016
Why Cannot We Have a Strongly Consistent Family of Skew Normal (and Higher Order) Distributions
Thongchai Dumrongpokaphan and Vladik Kreinovich

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and Van Nam Huynh (eds.), Robustness in Econometrics, Springer Verlag, Cham, Switzerland, 2017, pp. 69-78.

In many practical situations, the only information that we have about the probability distribution is its first few moments. Since many statistical techniques require us to select a single distribution, it is desirable to select, out of all possible distributions with these moments, a single "most representative" one. When we know the first two moments, a natural idea is to select a normal distribution. This selection is strongly consistent in the sense that if a random variable is a sum of several independent ones, then selecting a normal distribution for each of the terms in the sum leads to the selection of a normal distribution for the sum. In situations when we know three moments, there is also a widely used selection -- of the so-called skew-normal distribution. However, this selection is not strongly consistent in the above sense. In this paper, we show that this absence of strong consistency is not a fault of a specific selection but a general feature of the problem: for third and higher order moments, no strongly consistent selection is possible.

File in pdf


Technical Report UTEP-CS-16-41, July 2016
Diversity & Computing Workforce Success: Changing Business as Usual
Report on the Roundtable at the Computing Alliance for Hispanic-Serving Institutions (CAHSI) Summit, San Juan, Puerto Rico, September 12, 2015
Ann Q. Gates, Claudia Casas, and Andrea Tirres

File in pdf


Technical Report UTEP-CS-16-40, June 2016
Why Compaction Meter Value (CMV) Is a Good Measure of Pavement Stiffness: Towards a Possible Theoretical Explanation
Andrzej M. Pownuk, Pedro Barragan Olague, and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2016, Vol. 40, pp. 48-54.

To measure the stiffness of the compacted pavement, practitioners use the Compaction Meter Value (CMV): the ratio between the amplitude of the first harmonic of the compactor's acceleration and the amplitude corresponding to the vibration frequency. Numerous experiments show that CMV is highly correlated with the pavement stiffness, but as of now, there is no convincing theoretical explanation for this correlation. In this paper, we provide a possible theoretical explanation for the empirical correlation. This explanation also explains why the stiffer the material, the more higher-order harmonics we observe.
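
For illustration, here is a minimal Python sketch of how the CMV ratio can be computed from a sampled acceleration signal, assuming the vibration frequency f0 is known; the signal is synthetic, and the conventional constant factor used by practitioners is omitted:

import numpy as np

def cmv(signal, sampling_rate, f0):
    """CMV: amplitude of the first harmonic (2*f0) of the acceleration
    divided by the amplitude at the vibration frequency f0."""
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal)) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / sampling_rate)
    amp_f0 = spectrum[np.argmin(np.abs(freqs - f0))]
    amp_2f0 = spectrum[np.argmin(np.abs(freqs - 2 * f0))]
    return amp_2f0 / amp_f0

# Synthetic example: 30 Hz vibration plus a weak 60 Hz harmonic
t = np.arange(0, 1, 1 / 1000)
accel = np.sin(2 * np.pi * 30 * t) + 0.2 * np.sin(2 * np.pi * 60 * t)
print(cmv(accel, sampling_rate=1000, f0=30))  # roughly 0.2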

File in pdf


Technical Report UTEP-CS-16-39, June 2016
What Will Make Computers Faster: An Approach Based on Computational Complexity
Vladik Kreinovich

To appear in International Journal of Unconventional Computing

File in pdf


Technical Report UTEP-CS-16-38, June 2016
Computers of Generation Omega and the Future of Computing
Vladik Kreinovich

File in pdf


Technical Report UTEP-CS-16-37, June 2016
George Klir and the Great Chain of Ideas
Vladik Kreinovich

File in pdf


Technical Report UTEP-CS-16-36, June 2016
How Resilient Modulus of a Pavement Depends on Moisture Level: Towards a Theoretical Justification of a Practically Important Empirical Formula
Pedro Barragan Olague, Olga Kosheleva, and Vladik Kreinovich

To appear in Proceedings of the 2016 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2016, El Paso, Texas, October 31 - November 4, 2016.

Resilient modulus is a mechanical characteristic describing the stiffness of a pavement. Its value depends on the moisture level. In pavement construction, it is important to be able, knowing the resilient modulus corresponding to one moisture level, to predict resilient modulus corresponding to other moisture levels. There exists an empirical formula for this prediction. In this paper, we provide a possible theoretical explanation for this empirical formula.

File in pdf


Technical Report UTEP-CS-16-35, June 2016
Which Point From an Interval Should We Choose?
Andrzej Pownuk and Vladik Kreinovich

To appear in Proceedings of the 2016 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2016, El Paso, Texas, October 31 - November 4, 2016.

In many practical situations, we know the exact form of the objective function, and we know the optimal decision corresponding to each value of the corresponding parameter x. What should we do if we do not know the exact value of x, and instead, we only know x with uncertainty -- e.g., with interval uncertainty? In this case, a reasonable idea is to select one value from the given interval, and to use the optimal decision corresponding to the selected value. But which value should we choose? In this paper, we provide a solution to this problem in the simplest 1-D case. Somewhat surprisingly, it turns out that the usual practice of selecting the midpoint is rarely optimal: a better selection is possible.

File in pdf


Technical Report UTEP-CS-16-34, June 2016
What If We Use Different "And"-Operations in the Same Expert System
Mahdokht Afravi and Vladik Kreinovich

To appear in Proceedings of the 2016 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2016, El Paso, Texas, October 31 - November 4, 2016.

In expert systems, we often face a problem of estimating the expert's degree of confidence in a composite statement A & B based on the known expert's degrees of confidence a = d(A) and b = d(B) in individual statements A and B. The corresponding estimate f&(a,b) is sometimes called an "and"-operation. Traditional fuzzy logic assumes that the same "and"-operation is applied to all pairs of statements. In this case, it is reasonable to justify that the "and"-operation be associative; such "and"-operations are known as t-norms. In practice, however, in different areas, different "and"-operations provide a good description of expert reasoning. As a result, when we combine expert knowledge from different areas into a single expert system, it is reasonable to use different "and"-operations to combine different statements. In this case, associativity is no longer a natural requirement. We show, however, that in such situations, under some reasonable conditions, associativity of each "and"-operation can still be deduced. Thus, in this case, we can still use associative t-norms.
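
For illustration, here is a small Python sketch of three classical associative "and"-operations (t-norms), with a brute-force associativity check; the sketch is ours, not the paper's:

def t_min(a, b):   # Zadeh's "and"
    return min(a, b)

def t_prod(a, b):  # product "and"
    return a * b

def t_luk(a, b):   # Lukasiewicz "and"
    return max(0.0, a + b - 1.0)

# Associativity check on a sample of degrees of confidence
degrees = [0.0, 0.3, 0.7, 1.0]
for t in (t_min, t_prod, t_luk):
    assert all(abs(t(t(a, b), c) - t(a, t(b, c))) < 1e-12
               for a in degrees for b in degrees for c in degrees)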

File in pdf


Technical Report UTEP-CS-16-33, June 2016
Towards the Most Robust Way of Assigning Numerical Degrees to Ordered Labels, With Possible Applications to Dark Matter and Dark Energy
Olga Kosheleva, Vladik Kreinovich, Martha Osegueda Escobar, and Kimberly Kato

To appear in Proceedings of the 2016 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2016, El Paso, Texas, October 31 - November 4, 2016.

Experts often describe their estimates by using words from natural language, i.e., in effect, sorted labels. To efficiently represent the corresponding expert knowledge in a computer-based system, we need to translate these labels into a computer-understandable language, i.e., into numbers. There are many ways to translate labels into numbers. In this paper, we propose to select a translation which is the most robust, i.e., which preserves the order between the corresponding numbers under the largest possible deviations from the original translation. The resulting formulas are in good accordance with the translation coming from the Laplace's principle of sufficient reason, and -- somewhat surprisingly -- with the current estimates of the proportion of dark matter and dark energy in our Universe.

File in pdf


Technical Report UTEP-CS-16-32, June 2016
Updated version UTEP-CS-16-32b, September 2016
Final version UTEP-CS-16-32c, September 2016
How to Select an Appropriate Similarity Measure: Towards a Symmetry-Based Approach
Ildar Batyrshin, Thongchai Dumrongpokaphan, Vladik Kreinovich, and Olga Kosheleva

Published in Proceedings of the 5th International Symposium on Integrated Uncertainty in Knowledge Modelling and Decision Making IUKM'2016, Da Nang, Vietnam, November 30 - December 2, 2016, pp. 457-468.

When practitioners analyze the similarity between time series, they often use correlation to gauge this similarity. Sometimes this works, but sometimes this leads to counter-intuitive results, in which case other similarity measures are more appropriate. An important question is how to select an appropriate similarity measure. In this paper, we show, on simple examples, that the use of natural symmetries -- scaling and shift -- can help with such a selection.

Original file UTEP-CS-16-32 in pdf
Updated version UTEP-CS-16-32b in pdf
Final version UTEP-CS-16-32c in pdf


Technical Report UTEP-CS-16-31, June 2016
Analysis of the Execution Time Variation of OpenMP-based Applications on the Intel Xeon Phi
Roberto Camacho Barranco and Patricia J. Teller

The Intel Xeon Phi accelerator is currently being used in several large-scale computer clusters and supercomputers to enhance the execution-time performance of computation-intensive applications. While performing a comprehensive profiling of the Intel Xeon Phi execution-time behavior of different applications included in the Rodinia Benchmark suite, we observed large variations in application execution times. In this report we present the average execution times for different runs of each application. In addition, we describe the different steps taken to try to solve this problem.

For example, a brief study was performed using one of these applications, namely, a matrix-multiply kernel. By improving the vectorization of this application, the variation was reduced from an average of 25% to an average of 10%. However, the root cause of the remaining variation was not identified. Because the execution times of the other applications also exhibit similar levels of variation, we hypothesize that this execution-time variation could be caused by the hardware or by performance issues associated with how OpenMP is utilized.

File in pdf


Technical Report UTEP-CS-16-30, May 2016
Empirically Successful Transformations from Non-Gaussian to Close-to-Gaussian Distributions: Theoretical Justification
Thongchai Dumrongpokaphan, Pedro Barragan, and Vladik Kreinovich

Published in Thai Journal of Mathematics, 2016, Special Issue on Applied Mathematics: Bayesian Econometrics, pp. 51-61.

A large number of efficient statistical methods have been designed for the frequent case when the distributions are normal (Gaussian). In practice, many probability distributions are not normal. In this case, Gaussian-based techniques cannot be directly applied. In many cases, however, we can apply these techniques indirectly -- by first applying an appropriate transformation to the original variables, after which their distribution becomes close to normal. Empirical analysis of different transformations has shown that the most successful are the power transformations X → X^h and their modifications. In this paper, we provide a symmetry-based explanation for this empirical success.
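
One widely used concrete instance of such transformations is the Box-Cox modification of the power family; here is a minimal Python sketch, with a synthetic (clearly non-Gaussian) exponential sample:

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=1000)

# Box-Cox fits the exponent h in a (shifted, scaled) power transformation
transformed, h = stats.boxcox(x)
print("fitted exponent:", h)
print("skewness before:", stats.skew(x), "after:", stats.skew(transformed))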

File in pdf


Technical Report UTEP-CS-16-29, May 2016
Updated version UTEP-CS-16-29b, September 2016
Need for Most Accurate Discrete Approximations Explains Effectiveness of Statistical Methods Based on Heavy-Tailed Distributions
Songsak Sriboonchitta, Vladik Kreinovich, Olga Kosheleva, and Hung T. Nguyen

Published in Proceedings of the 5th International Symposium on Integrated Uncertainty in Knowledge Modelling and Decision Making IUKM'2016, Da Nang, Vietnam, November 30 - December 2, 2016, pp. 523-531.

In many practical situations, it is effective to use statistical methods based on Gaussian distributions and, more generally, on distributions whose tails are light -- in the sense that, as the value increases, the corresponding probability density tends to 0 very fast. There are many theoretical explanations for this effectiveness. On the other hand, in many other cases, it is effective to use statistical methods based on heavy-tailed distributions, in which the probability density is asymptotically described, e.g., by a power law. In contrast to the light-tailed case, there is no convincing theoretical explanation for the effectiveness of the heavy-tail-based statistical methods. In this paper, we provide such a theoretical explanation. This explanation is based on the fact that in many applications, we approximate a continuous distribution by a discrete one. From this viewpoint, it is desirable, among all possible distributions which are consistent with our knowledge, to select a distribution for which such an approximation is the most accurate. It turns out that under reasonable conditions, this requirement (of allowing the most accurate discrete approximation) indeed leads to the statistical methods based on the power-law heavy-tailed distributions.

Original file UTEP-CS-16-29 in pdf
Updated version UTEP-CS-16-29b in pdf


Technical Report UTEP-CS-16-28, May 2016
Bayesian Approach to Intelligent Control and Its Relation to Fuzzy Control
Kongliang Zhu, Vladik Kreinovich, and Olga Kosheleva

Published in Thai Journal of Mathematics, 2016, Special Issue on Applied Mathematics: Bayesian Econometrics, pp. 25-36.

In many application areas, including economics, experts describe their knowledge by using imprecise ("fuzzy") words from natural language. To design an automatic control system, it is therefore necessary to translate this knowledge into precise computer-understandable terms. To perform such a translation, a special semi-heuristic fuzzy methodology was designed. This methodology has been successfully applied to many practical problems, but its semi-heuristic character is a big obstacle to its use: without a theoretical justification, we are never 100% sure that this methodology will be successful in other applications as well. It is therefore desirable to come up with either a theoretical justification of exactly this methodology, or with a theoretically justified modification of this methodology. In this paper, we apply Bayesian techniques to the above translation problem, and we analyze when the resulting methodology is identical to fuzzy techniques -- and when it is different.

File in pdf


Technical Report UTEP-CS-16-27, May 2016
Big Data: A Geometric Explanation of a Seemingly Counterintuitive Strategy
Olga Kosheleva and Vladik Kreinovich

Published in Geombinatorics, 2016, Vol. 26, No. 2, pp. 71-79.

Traditionally, progress in science was achieved by gradually modifying known problem-solving techniques -- so that the modified techniques can solve problems similar to the already-solved ones. Recently, however, a different -- and successful -- paradigm of big data appeared. In the big data paradigm, we, in contrast, look for problems which cannot be solved by gradual modifications of the existing methods. In this paper, we propose a geometric explanation for the empirical success of this new paradigm.

File in pdf


Technical Report UTEP-CS-16-26, May 2016
How to Make Testing and Grading Non-Confrontational: Towards Applying Loving Kindness to Testing and Grading
Francisco Zapata and Olga Kosheleva

Published in Journal of Uncertain Systems, 2017, Vol. 11, No. 2, pp. 149-153.

Students and teachers have a common goal: that the students successfully learn all the required material. At first glance, the existence of a common goal should result in a conflict-free environment. However, in practice, the current education process has become very confrontational, especially during testing and grading: students come up with more and more sophisticated ways of cheating, while instructors use more and more complex tools to detect this cheating. Do we need to continue this arms race? Would it not be better to use the ideas of loving kindness and come up with a conflict-free teaching environment? In this paper, we analyze the reasons for the current conflicts, and we use this analysis to come up with a non-confrontational way of performing testing and grading, a way that we have actually tested in a class.

File in pdf


Technical Report UTEP-CS-16-25, April 2016
Updated version UTEP-CS-16-25a, May 2016
Which Robust Versions of Sample Variance and Sample Covariance Are Most Appropriate for Econometrics: Symmetry-Based Analysis
Songsak Sriboonchitta, Ildar Batyrshin, and Vladik Kreinovich

Published in Thai Journal of Mathematics, 2016, Special Issue on Applied Mathematics: Bayesian Econometrics, pp. 37-50.

In many practical situations, we do not know the shape of the corresponding probability distributions and therefore, we need to use robust statistical techniques, i.e., techniques that are applicable to all possible distributions. Empirically, it turns out that the most efficient robust version of sample variance is the average value of the p-th powers of the deviations |x_i - a| from the (estimated) mean a. In this paper, we use natural symmetries to provide a theoretical explanation for this empirical success, and to show how this optimal robust version of sample variance can be naturally extended to a robust version of sample covariance.
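
As an illustration, here is a minimal Python sketch of this robust version of sample variance; the choice p = 1.5 is ours, purely illustrative:

import numpy as np

def robust_variance(x, p):
    """Average of the p-th powers of the deviations |x_i - a|
    from the estimated mean a."""
    x = np.asarray(x, dtype=float)
    a = x.mean()
    return np.mean(np.abs(x - a) ** p)

sample = [1.2, 0.8, 1.1, 0.9, 5.0]      # the last value is an outlier
print(robust_variance(sample, p=2.0))   # ordinary sample variance
print(robust_variance(sample, p=1.5))   # less sensitive to the outlier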

Original file UTEP-CS-16-25 in pdf
Updated version UTEP-CS-16-25a in pdf


Technical Report UTEP-CS-16-24, April 2016
Updated version UTEP-CS-16-24a, June 2016
Fuzzy-Inspired Hierarchical Version of the von Neumann-Morgenstern Solutions as a Natural Way to Resolve Collaboration-Related Conflicts
Olga Kosheleva, Vladik Kreinovich, and Martha Osegueda Escobar

Published in Proceedings of International IEEE Conference on Systems, Man, and Cybernetics SMC'2016, Budapest, Hungary, October 9-12, 2016.

In situations when several participants collaborate with each other, it is desirable to come up with a fair way to divide the resulting gain between the participants. Such a fair way was proposed by John von Neumann and Oskar Morgenstern, the fathers of modern game theory. However, in some situations, the von Neumann-Morgenstern solution does not exist. To cover such situations, we propose to use a fuzzy-inspired hierarchical version of the von Neumann-Morgenstern (NM) solution. We prove that, in contrast to the original NM solution, the hierarchical version always exists.

Original file UTEP-CS-16-24 in pdf
Updated version UTEP-CS-16-24a in pdf


Technical Report UTEP-CS-16-23, April 2016
Updated version UTEP-CS-16-23a, July 2016
Rotation-Invariance Can Further Improve State-of-the-Art Blind Deconvolution Techniques
Fernando Cervantes, Bryan Usevitch, and Vladik Kreinovich

Published in Proceedings of International IEEE Conference on Systems, Man, and Cybernetics SMC'2016, Budapest, Hungary, October 9-12, 2016.

In many real-life situations, we need to reconstruct a blurred image when no information about the blurring is available. This problem is known as the problem of blind deconvolution. There exist techniques for solving this problem, but these techniques are not rotation-invariant: if we rotate the image a little bit, they, in general, lead to a different deconvolution result. Therefore, even when the original reconstruction is optimal, the reconstruction of a rotated image will be different and, thus, not optimal. To improve the quality of blind deconvolution, it is desirable to modify the current state-of-the-art techniques by making them rotation-invariant. In this paper, we show how this can be done, and we show that this indeed improves the quality of blind deconvolution.

Original file UTEP-CS-16-23 in pdf
Updated version UTEP-CS-16-23a in pdf


Technical Report UTEP-CS-16-22, April 2016
Updated version UTEP-CS-16-22a, June 2016
How to Transform Partial Order Between Degrees into Numerical Values
Olga Kosheleva, Vladik Kreinovich, Joe Lorkowski, and Martha Osegueda

Published in Proceedings of International IEEE Conference on Systems, Man, and Cybernetics SMC'2016, Budapest, Hungary, October 9-12, 2016.

Fuzzy techniques are a successful way to handle expert knowledge, enabling us to capture the different degrees of experts' certainty in their statements. To use fuzzy techniques, we need to describe an expert's degree of certainty in numerical terms. Some experts can provide such numbers, but others can only describe their degrees by using natural-language words like "very", "somewhat", "to some extent", etc. In general, all we know about these word-valued degrees is that there is a natural partial order between them: e.g., "very small" is clearly smaller than "somewhat small". In this paper, we propose a natural way to transform such a partial order between degrees into numerical values.

Original file UTEP-CS-16-22 in pdf
Updated version UTEP-CS-16-22a in pdf


Technical Report UTEP-CS-16-21, April 2016
How to Introduce Technical Details of Quantum Computing in a Theory of Computation Class: Using the Basic Case of the Deutsch-Jozsa Algorithm
Olga Kosheleva and Vladik Kreinovich

Published in International Journal of Computing and Optimization, 2016, Vol. 3, No. 1, pp. 83-91.

Many students taking the theory of computation class have heard about quantum computing and are curious about it. However, the usual technical description of quantum computing requires a large amount of preliminary information, too much to fit into an already packed class. In this paper, we propose a way to introduce technical details of quantum computing that does not require much time -- it can be described in less than an hour. As such an introduction, we use a simplified description of the basic case of one of the pioneering algorithms of quantum computing.
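
For readers who want to see these technical details in runnable form, here is a self-contained numpy simulation of the basic one-bit case (Deutsch's algorithm); this sketch is ours, not the paper's exposition:

import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
I2 = np.eye(2)

def deutsch(f):
    """Decide whether f: {0,1} -> {0,1} is constant or balanced
    using a single call to the quantum oracle."""
    # Oracle U_f |x, y> = |x, y XOR f(x)> as a 4x4 permutation matrix
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1
    state = np.kron([1, 0], [0, 1])    # start in |0>|1>
    state = np.kron(H, H) @ state      # Hadamard on both qubits
    state = U @ state                  # the single oracle call
    state = np.kron(H, I2) @ state     # Hadamard on the first qubit
    # Probability that the first qubit is measured as 1
    p1 = state[2] ** 2 + state[3] ** 2
    return "balanced" if p1 > 0.5 else "constant"

print(deutsch(lambda x: 0))  # constant function -> "constant"
print(deutsch(lambda x: x))  # balanced function -> "balanced"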

File in pdf


Technical Report UTEP-CS-16-20, March 2016
Updated version UTEP-CS-16-20a, May 2016
How to Estimate Amount of Useful Information, in Particular Under Imprecise Probability
Luc Longpre, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the 7th International Workshop on Reliable Engineering Computing REC'2016, Bochum, Germany, June 15-17, 2016, pp. 257-268.

Traditional Shannon's information theory describes the overall amount of information, without distinguishing between useful and unimportant information. Such a distinction is needed, e.g., in privacy protection, where it is crucial to protect important information while it is not that crucial to protect unimportant information. In this paper, we show how Shannon's definition can be modified so that it will describe only the amount of useful information.
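
For reference, here is a minimal Python sketch of Shannon's overall amount of information (entropy), the quantity that the paper refines; the sample distributions are illustrative:

import numpy as np

def shannon_entropy(probabilities):
    """Shannon's H = -sum_i p_i * log2(p_i), in bits."""
    p = np.asarray(probabilities, dtype=float)
    p = p[p > 0]              # the term 0 * log(0) is taken to be 0
    return -np.sum(p * np.log2(p))

print(shannon_entropy([0.5, 0.5]))         # 1 bit
print(shannon_entropy([0.9, 0.05, 0.05]))  # about 0.57 bits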

Original file UTEP-CS-16-20 in pdf
Updated version UTEP-CS-16-20a in pdf


Technical Report UTEP-CS-16-19, March 2016
Model-Order Reduction Using Interval Constraint Solving Techniques
Leobardo Valera and Martine Ceberio

Published in Proceedings of the 7th International Workshop on Reliable Engineering Computing REC'2016, Bochum, Germany, June 15-17, 2016.

Many natural phenomena can be modeled as ordinary or partial differential equations. A way to find solutions of such equations is to discretize them and to solve the corresponding large (and possibly nonlinear) systems of equations.

Solving a large nonlinear system of equations is very computationally complex due to several numerical issues, such as high linear-algebra cost and large memory requirements. Model-Order Reduction (MOR) has been proposed as a way to overcome the issues associated with large dimensions, the most used approach for doing so being Proper Orthogonal Decomposition (POD). The key idea of POD is to reduce a large number of interdependent variables (snapshots) of the system to a much smaller number of uncorrelated variables while retaining as much as possible of the variation in the original variables.

In this work, we show how intervals and constraint solving techniques can be used to compute all the snapshots at once (I-POD). This new process gives us two advantages over the traditional POD method: 1. handling uncertainty in some parameters or inputs; 2. reducing the computational cost of computing the snapshots.
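
For contrast with the proposed I-POD, here is a minimal Python sketch of the standard POD step: extracting the dominant modes from a snapshot matrix via the singular value decomposition. The snapshot data are synthetic:

import numpy as np

def pod_basis(snapshots, k):
    """Columns of `snapshots` are solution snapshots; return the k
    dominant left singular vectors as the reduced basis."""
    U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :k], s

# Synthetic snapshots: 200-dimensional states that really live in ~3 modes
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3)) @ rng.standard_normal((3, 50))
basis, singular_values = pod_basis(X, k=3)
print(singular_values[:5])  # values beyond the 3rd are essentially 0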

File in pdf


Technical Report UTEP-CS-16-18, March 2016
Why Lp-methods in Signal and Image Processing: A Fuzzy-Based Explanation
Fernando Cervantes, Bryan Usevitch, and Vladik Kreinovich

To appear in Proceedings of the 2016 Annual Conference of the North American Fuzzy Information Processing Society NAFIPS'2016, El Paso, Texas, October 31 - November 4, 2016.

In signal and image processing, it is often beneficial to use semi-heuristic Lp-methods, i.e., methods that minimize the sum of the p-th powers of the discrepancies. In this paper, we show that a fuzzy-based analysis of the corresponding intuitive idea leads exactly to the Lp-methods.
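
As an illustration, here is a minimal Python sketch of an Lp-fit of a constant level to noisy data; the value p = 1.1 and the data are ours, purely illustrative:

import numpy as np
from scipy.optimize import minimize_scalar

def lp_fit_constant(data, p):
    """Find the constant c minimizing sum_i |data_i - c|^p."""
    data = np.asarray(data, dtype=float)
    res = minimize_scalar(lambda c: np.sum(np.abs(data - c) ** p),
                          bounds=(data.min(), data.max()), method="bounded")
    return res.x

noisy = [1.0, 1.1, 0.9, 1.05, 3.0]     # the last point is an outlier
print(lp_fit_constant(noisy, p=2.0))   # least squares: pulled by the outlier
print(lp_fit_constant(noisy, p=1.1))   # close to the median: more robust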

File in pdf


Technical Report UTEP-CS-16-17, March 2016
Revised version UTEP-CS-16-17a, May 2016
Limitations of Realistic Monte-Carlo Techniques in Estimating Interval Uncertainty
Andrzej Pownuk, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the 7th International Workshop on Reliable Engineering Computing REC'2016, Bochum, Germany, June 15-17, 2016, pp. 269-284.

Because of the measurement errors, the result Y = f(X1, ..., Xn) of processing the measurement results X1, ..., Xn is, in general, different from the value y = f(x1, ..., xn) that we would obtain if we knew the exact values x1, ..., xn of all the inputs. In the linearized case, we can use numerical differentiation to estimate the resulting difference Y - y; however, this requires more than n calls to an algorithm computing f, and for complex algorithms and large n, this can take too long. In situations when for each input xi, we know the probability distribution of the measurement error, we can use a faster technique for estimating Y - y: namely, the Monte-Carlo simulation technique. A similar Monte-Carlo technique is also possible for the case of interval uncertainty, but the resulting simulation is not realistic: this technique uses Cauchy distributions, which can result in arbitrarily small or arbitrarily large values, while we know that each measurement error Xi - xi is located within the corresponding interval. In this paper, we prove that this non-realistic character of interval Monte-Carlo simulations is inevitable: namely, that no realistic Monte-Carlo simulation can provide a correct bound for Y - y.
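
For illustration, here is a minimal Python sketch of the linearized numerical-differentiation estimate mentioned above; the function f, the measured values, and the interval radii are made-up inputs:

import numpy as np

def interval_bound_linearized(f, x, radii, h=1e-6):
    """Linearized bound on |Y - y|: sum_i |df/dx_i| * Delta_i,
    estimated with n extra calls to f (one finite difference per input)."""
    x = np.asarray(x, dtype=float)
    y = f(x)
    bound = 0.0
    for i, delta in enumerate(radii):
        xi = x.copy()
        xi[i] += h
        bound += abs((f(xi) - y) / h) * delta
    return bound

f = lambda v: v[0] * v[1] + np.sin(v[2])
print(interval_bound_linearized(f, [1.0, 2.0, 0.5], [0.1, 0.1, 0.05]))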

Original file UTEP-CS-16-17 in pdf
Revised version UTEP-CS-16-17a in pdf


Technical Report UTEP-CS-16-16, March 2016
Updated version UTEP-CS-16-16a, April 2016
How to Estimate Resilient Modulus for Unbound Aggregate Materials: A Theoretical Explanation of an Empirical Formula
Pedro Barragan Olague, Soheil Nazarian, Vladik Kreinovich, and Afshin Gholamy

Published in Proceedings of the 2016 World Conference on Soft Computing, Berkeley, California, May 22-25, 2016, pp. 203-207.

To ensure the quality of pavement, it is important to make sure that the resilient moduli -- that describe the stiffness of all the pavement layers -- exceed a certain threshold. From the mechanical viewpoint, pavement is a non-linear medium. Several empirical formulas have been proposed to describe this non-linearity. In this paper, we describe a theoretical explanation for the most accurate of these empirical formulas.

Original file UTEP-CS-16-16 in pdf
Updated version UTEP-CS-16-16a in pdf


Technical Report UTEP-CS-16-15, February 2016
Updated version UTEP-CS-16-15a, April 2016
How to Make a Solution to a Territorial Dispute More Realistic: Taking into Account Uncertainty, Emotions, and Step-by-Step Approach
Mahdokht Afravi and Vladik Kreinovich

Published in Proceedings of the 2016 World Conference on Soft Computing, Berkeley, California, May 22-25, 2016, pp. 336-340.

In many real-life situations, it is necessary to divide a disputed territory between several interested parties. The usual way to perform this division is by using Nash's bargaining solution, i.e., by finding a partition that maximizes the product of the participants' utilities. However, this solution is based on several idealized assumptions: that we know the exact values of all the utilities, that division is performed on a purely rational basis, with no emotions involved, and that the entire decision is made once. In practice, we only know the utilities with some uncertainty, emotions are often involved, and the solution is often step-by-step. In this paper, we show how to make a solution to a territorial dispute more realistic by taking all this into account.
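
For illustration, here is a minimal Python sketch of Nash's bargaining solution for a 1-D division between two parties; the utility functions are made-up examples:

import numpy as np

def nash_split(u1, u2, grid=10001):
    """Division point x maximizing the Nash product u1(x) * u2(1 - x)."""
    xs = np.linspace(0.0, 1.0, grid)
    return xs[np.argmax(u1(xs) * u2(1.0 - xs))]

# Party 1 values its share linearly; party 2 has diminishing returns
u1 = lambda share: share
u2 = lambda share: np.sqrt(share)
print(nash_split(u1, u2))  # maximizes x * sqrt(1 - x), i.e., x = 2/3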

Original file UTEP-CS-16-15 in pdf
Updated version UTEP-CS-16-15a in pdf


Technical Report UTEP-CS-16-14, February 2016
Updated version UTEP-CS-16-14a, April 2016
How to Predict Nesting Sites and How to Measure Shoreline Erosion: Fuzzy and Probabilistic Techniques for Environment-Related Spatial Data Processing
Stephen M. Escarzaga, Craig Tweedie, Olga Kosheleva, and Vladik Kreinovich

Published in Proceedings of the 2016 World Conference on Soft Computing, Berkeley, California, May 22-25, 2016, pp. 249-252.

In this paper, we show how fuzzy and probabilistic techniques can be used in environment-related data processing. Specifically, we will show that these methods help in solving two environment-related problems: how to predict the birds' nesting sites and how to measure shoreline erosion.

Original file UTEP-CS-16-14 in pdf
Updated version UTEP-CS-16-14a in pdf


Technical Report UTEP-CS-16-13, February 2016
Updated version UTEP-CS-16-13a, April 2016
Chemical Kinetics in Situations Intermediate Between Usual and High Concentrations: Fuzzy-Motivated Derivation of the Formulas
Olga Kosheleva, and Vladik Kreinovich, and Laecio Carvalho Barros

Published in Proceedings of the 2016 World Conference on Soft Computing, Berkeley, California, May 22-25, 2016, pp. 332-335.

In the traditional chemical kinetics, the rate of each reaction A + ... + B → ... is proportional to the product cA * ... * cB of the concentrations of all the input substances A, ..., B. For high concentrations cA, ..., cB, the reaction rate is known to be proportional to the minimum min(cA, ..., cB). In this paper, we use fuzzy-related ideas to derive the formula of the reaction rate for situations intermediate between usual and high concentrations.
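
For illustration, here is a small Python sketch of the two limiting rate laws; the intermediate formula itself is derived in the paper, so it is not reproduced here:

def rate_usual(k, concentrations):
    """Classical mass-action kinetics: rate = k * cA * ... * cB."""
    rate = k
    for c in concentrations:
        rate *= c
    return rate

def rate_high(k, concentrations):
    """High-concentration limit: rate = k * min(cA, ..., cB)."""
    return k * min(concentrations)

print(rate_usual(1.0, [0.2, 0.3]))  # product rate law: 0.06
print(rate_high(1.0, [0.2, 0.3]))   # minimum rate law: 0.2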

Original file UTEP-CS-16-13 in pdf
Updated version UTEP-CS-16-13a in pdf


Technical Report UTEP-CS-16-12, February 2016
Updated version UTEP-CS-16-12a, April 2016
How to Describe Measurement Uncertainty and Uncertainty of Expert Estimates?
Nicolas Madrid, Irina Perfilieva, and Vladik Kreinovich

Published in Proceedings of the 2016 World Conference on Soft Computing, Berkeley, California, May 22-25, 2016, pp. 318-322.

Measurement and expert estimates are never absolutely accurate. Thus, when we know the result M(u) of measurement or expert estimate, the actual value A(u) of the corresponding quantity may be somewhat different from M(u). In practical applications, it is desirable to know how different it can be, i.e., what are the bounds f(M(u)) <= A(u) <= g(M(u)). Ideally, we would like to know the tightest bounds, i.e., the largest possible values f(x) and the smallest possible values g(x). In this paper, we analyze for which (partially ordered) sets of values such tightest bounds always exist: it turns out that they always exist only for complete lattices.

Original file UTEP-CS-16-12 in pdf
Updated version UTEP-CS-16-12a in pdf


Technical Report UTEP-CS-16-11, February 2016
Updated version UTEP-CS-16-11a, April 2016
Why Sparse? Fuzzy Techniques Explain Empirical Efficiency of Sparsity-Based Data- and Image-Processing Algorithms
Fernando Cervantes, Brian Usevitch, Leobardo Valera, and Vladik Kreinovich

Published in Proceedings of the 2016 World Conference on Soft Computing, Berkeley, California, May 22-25, 2016, pp. 165-169.

In many practical applications, it turned out to be efficient to assume that the signal or an image is sparse, i.e., that when we decompose it into appropriate basic functions (e.g., sinusoids or wavelets), most of the coefficients in this decomposition will be zeros. At present, the empirical efficiency of sparsity-based techniques remains somewhat of a mystery. In this paper, we show that fuzzy-related techniques can explain this empirical efficiency. A similar explanation can be obtained by using probabilistic techniques; this fact increases our confidence that our explanation is correct.
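
As an illustration of the sparsity assumption, here is a minimal Python sketch that keeps only the few largest coefficients of a discrete-cosine decomposition of a synthetic signal; the threshold choice is ours:

import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0, 1, 256)
signal = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)

coeffs = dct(signal, norm="ortho")
# Sparsity assumption: most coefficients are (close to) zero
k = 10
threshold = np.sort(np.abs(coeffs))[-k]
sparse = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)
reconstructed = idct(sparse, norm="ortho")
print("kept", k, "of", len(coeffs), "coefficients;",
      "max reconstruction error:", np.max(np.abs(reconstructed - signal)))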

Original file UTEP-CS-16-11 in pdf
Updated version UTEP-CS-16-11a in pdf


Technical Report UTEP-CS-16-10, February 2016
Revised version UTEP-CS-16-10a, May 2016
Voting Aggregation Leads to (Interval) Median
Olga Kosheleva and Vladik Kreinovich

Published in Proceedings of the 7th International Workshop on Reliable Engineering Computing REC'2016, Bochum, Germany, June 15-17, 2016, pp. 285-298.

When we have several results of measuring or estimating the same quantities, it is desirable to aggregate them into a single estimate for the desired quantities. A natural requirement is that if the majority of estimates has some property, then the aggregate estimate should have the same property. It turns out that it is not possible to require this for all possible properties -- but we can require it for bounds, i.e., for properties that the value of the quantity is in between given bounds a and b. In this paper, we prove that if we restrict the above "voting" approach to such properties, then the resulting aggregate is an (interval) median. This result provides an additional justification for the use of median -- in addition to the usual justification that median is the most robust aggregate operation.

Original file UTEP-CS-16-10 in pdf
Revised version UTEP-CS-16-10a in pdf


Technical Report UTEP-CS-16-09, January 2016
Updated version UTEP-CS-16-09a, April 2016
Comparison of formulations of applied tasks with interval, fuzzy set and probability approaches
Boris Kovalerchuk and Vladik Kreinovich

To appear in Proceedings of the 2016 IEEE International Conference on Fuzzy Systems FUZZ-IEEE'2016, Vancouver, Canada, July 24-29, 2016.

The focus of this paper is to clarify the concepts of solutions to linear equations in interval, probabilistic, and fuzzy-set settings for real-world tasks. There is a fundamental difference between formal definitions of solutions and the physically meaningful concept of a solution in applied tasks when equations have uncertain components. For instance, a formal definition of the solution in terms of Moore interval analysis can be completely irrelevant for solving a real-world task. We show that formal definitions must follow the meaningful concept of the solution in the real world. The paper proposes several formalized definitions of the concept of solution for linear equations with uncertain components in interval, probability, and fuzzy-set terms.

Original file UTEP-CS-16-09 in pdf
Updated version UTEP-CS-16-09a in pdf


Technical Report UTEP-CS-16-08, January 2016
Updated version UTEP-CS-16-08a, June 2017
Why Superellipsoids: A Probability-Based Explanation
Pedro Barragan and Vladik Kreinovich

Published in Reliable Computing.

In many practical situations, it turns out that the set of possible values of the deviation vector is (approximately) a super-ellipsoid. In this paper, we provide a theoretical explanation for this empirical fact -- an explanation based on the natural notion of scale-invariance.

Original file UTEP-CS-16-08 in pdf
Updated version UTEP-CS-16-08a in pdf


Technical Report UTEP-CS-16-07, January 2016
Updated version UTEP-CS-16-07a, September 2016
Robustness as a Criterion for Selecting a Probability Distribution Under Uncertainty
Songsak Sriboonchitta, Hung T. Nguyen, Vladik Kreinovich, and Olga Kosheleva

Published in: Vladik Kreinovich, Songsak Sriboonchitta, and Van Nam Huynh (eds.), Robustness in Econometrics, Springer Verlag, Cham, Switzerland, 2017, pp. 57-68.

Often, we only have partial knowledge about a probability distribution, and we would like to select a single probability distribution ρ(x) out of all probability distributions which are consistent with the available knowledge. One way to make this selection is to take into account that usually, the values x of the corresponding quantity are also known only with some accuracy. It is therefore desirable to select a distribution which is the most robust -- in the sense that the inaccuracy in x leads to the smallest possible inaccuracy in the resulting probabilities. In this paper, we describe the corresponding most robust probability distributions, and we show that the use of the resulting probability distributions has an additional advantage: it makes related computations easier and faster.

Original file UTEP-CS-16-07 in pdf
Updated version UTEP-CS-16-07a in pdf


Technical Report UTEP-CS-16-06, January 2016
Why Dependence of Productivity on Group Size Is Log-Normal
Francisco Zapata, Olga Kosheleva, and Vladik Kreinovich

Published in Journal of Computing and Optimization, 2016, Vol. 3, No. 1, pp. 63-69.

Empirical analysis shows that, on average, the productivity of a group depends log-normally on its size. The current explanations for this empirical fact are based on reasonably complex assumptions about human behavior. In this paper, we show that the same conclusion can be made, in effect, from first principles, without making these complex assumptions.

File in pdf


Technical Report UTEP-CS-16-05, January 2016
Updated version UTEP-CS-16-05a, May 2016
Adjoint Fuzzy Partition and Generalized Sampling Theorem
Irina Perfilieva, Michal Holcapek, and Vladik Kreinovich

Published in Proceedings of the 16th International Conference on Information Processing and Management of Uncertainty in Knowledge-Based Systems IPMU'2016, Eindhoven, The Netherlands, June 20-24, 2016, pp. 459-469.

A new notion of adjoint fuzzy partition is introduced and the reconstruction of a function from its F-transform components is analyzed. An analogy with the Nyquist-Shannon-Kotelnikov sampling theorem is discussed.

Original file UTEP-CS-16-05 in pdf
Updated version UTEP-CS-16-05a in pdf


Technical Report UTEP-CS-16-04, January 2016
Bell-Shaped Curve for Productivity Growth: An Explanation
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2016, Vol. 40, pp. 44-47.

A recent analysis of the productivity growth data shows, somewhat surprisingly, that the dependence of the 20th-century productivity growth on time can be reasonably well described by a Gaussian formula. In this paper, we provide a possible theoretical explanation for this observation.

File in pdf


Technical Report UTEP-CS-16-03, January 2016
Do we have compatible concepts of epistemic uncertainty?
Michael Beer, Scott Ferson, and Vladik Kreinovich

Published in: H. W. Huang, J. Li, J. Zhang, and J. B. Chen (editors), Proceedings of the 6th Asian-Pacific Symposium on Structural Reliability and its Applications APSSRA6, May 28-30, 2016, Shanghai, China, pp. 27-37.

Epistemic uncertainties appear widely in civil engineering practice. There is a clear consensus that these epistemic uncertainties need to be taken into account for a realistic assessment of the performance and reliability of our structures and systems. However, there is no clearly defined procedure to meet this challenge. In this paper we discuss the phenomena that involve epistemic uncertainties in relation to modeling options. Particular attention is paid to set-theoretical approaches and imprecise probabilities. The respective concepts are categorized, and relationships are highlighted.

File in pdf


Technical Report UTEP-CS-16-02, January 2016
Why Locating Local Optima Is Sometimes More Complicated Than Locating Global Ones
Olga Kosheleva and Vladik Kreinovich

Published in Mathematical Structures and Modeling, 2016, Vol. 40, pp. 39-43.

In most applications, practitioners are interested in locating global optima. In such applications, local optima that result from some optimization algorithms are an unnecessary side effect. In other words, in such applications, locating global optima is a much more computationally complex problem than locating local optima. In several practical applications, however, local optima themselves are of interest. Somewhat surprisingly, it turned out that in many such applications, locating all local optima is a much more computationally complex problem than locating all global optima. In this paper, we provide a theoretical explanation for this surprising empirical phenomenon.

File in pdf


Technical Report UTEP-CS-16-01, January 2016
On Geometry of Finsler Causality: For Convex Cones, There Is No Affine-Invariant Linear Order (Similar to Comparing Volumes)
Olga Kosheleva and Vladik Kreinovich

Some physicists suggest that to more adequately describe the causal structure of space-time, it is necessary to go beyond the usual pseudo-Riemannian causality, to a more general Finsler causality. In this general case, the set of all the events which can be influenced by a given event is, locally, a generic convex cone, and not necessarily a pseudo-Riemannian-style quadratic cone. Since all current observations support pseudo-Riemannian causality, Finsler causality cones should be close to quadratic ones. It is therefore desirable to approximate a general convex cone by a quadratic one. This can be done if we select a hyperplane and approximate the intersections of the cones with this hyperplane. In the hyperplane, we need to approximate a convex body by an ellipsoid. This can be done in an affine-invariant way, e.g., by selecting, among all ellipsoids containing the body, the one with the smallest volume; since volume is affine-covariant, this selection is affine-invariant. However, this selection may depend on the choice of the hyperplane. It is therefore desirable to directly approximate the convex cone describing Finsler causality with the quadratic cone, ideally in an affine-invariant way. We prove, however, that on the set of convex cones, there is no affine-covariant characteristic like volume. So, any approximation is necessarily not affine-invariant.

File in pdf