The massive scale of current and next-generation massively parallel processing (MPP) systems presents significant challenges for fault tolerance. In particular, the standard approach to fault tolerance, application-directed checkpointing, places an enormous strain on the storage system and the interconnection network. This results in overheads that severely impact application performance and scalability. The checkpoint overhead can be reduced by decreasing the checkpoint latency, which is the time to write a checkpoint file, or by increasing the checkpoint interval, which is the compute time between writing checkpoint files. However, increasing the checkpoint interval may increase execution time in the presence of failures. The relationship among the mean time to interruption (MTTI), the checkpoint parameters, and the expected application execution time can be explored using a model, e.g., the model developed by researchers at Los Alamos National Laboratory (LANL). Such models can be used to calculate the optimal periodic checkpoint interval. In this paper, we use the LANL model of checkpointing and, through mathematical analysis, show the impact of a change in the checkpoint latency on the optimal checkpoint interval and on the overall execution time of the application.
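For reference, a widely used first-order estimate (Young's formula with Daly's correction term, given here only for orientation and not necessarily the exact LANL model analyzed in the paper) relates the optimal checkpoint interval t_opt to the checkpoint latency d and the MTTI M:

    t_{opt} \approx \sqrt{2\, d\, M} - d .

Larger checkpoint latencies thus push the optimal interval up, which is the relationship examined in detail below.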
For checkpoint latencies d1 and d2 and the corresponding optimal checkpoint intervals t1 and t2, our analysis shows the following results: (1) For a given MTTI, if d1 is greater than d2, then t1 is greater than or equal to t2. (2) When the checkpoint interval is fixed, a decrease in checkpoint latency results in a decrease in application execution time. (3) A reduction in checkpoint latency from d1 to d2, together with the corresponding change of the checkpoint interval from t1, the optimal interval associated with d1, to t2, the optimal interval associated with d2, translates to reduced application execution time when the difference between t1 and t2 exceeds a certain threshold value, which can be as large as 12% of t_opt.
In terms of application execution time, the approximation error of the optimal checkpoint interval is not significant. However, when we consider other performance metrics of the application, such as network bandwidth consumption and I/O bandwidth consumption, we conjecture that the information obtained from the analysis presented in this paper could be of value in reducing resource consumption.
The full paper is available in PDF format.
Molecules and low-dimensional systems such as atomic clusters, quantum dots,
and nanotubes show interesting novel properties that are quite different from
those of bulk materials. Computational studies play an important role
in understanding the fundamental properties of these systems, which in turn
is useful in manipulating these systems for possible technological
applications. In this talk, I will present an overview of first-principles
density functional calculations on molecular systems.
In particular, I will present our calculations on various novel systems
such as light-harvesting molecular solar cells, molecules with large
magnetization reversal energy, inorganic cage molecules, and clusters of
metal atoms.
While numerical methods for single-field problems have already reached a high
degree of maturity, computational methods for coupled problems are
far less developed. The reason is that the solution of a coupled problem is much
more difficult than the solution of any of the single-field problems involved.
This is due to the fact that the individual physical fields typically exhibit
very different behavior, and thus call for different meshes and adaptive
strategies, while each field also needs to be discretized in a finite element
space conforming to its natural Sobolev space.
Combining simulations with advanced synchrotron techniques
accelerates understanding of structure in disordered, amorphous, and
surface materials. This accelerated understanding has
allowed rapid development of structure/function correlations that
advance the commercial application of novel materials. In this
presentation we review the application of simulations using the CERIUS2
(ACCELRYS CORP.) programs and multiple synchrotron characterization
techniques to two classes of materials defined as "surface
compounds." One class of surface compounds comprises materials such as
MoS(2-x)C(x), widely used petroleum catalysts that improve
the environmental properties of transportation fuels. These
compounds may be viewed as "sulfide supported carbides" in their
catalytically active states. The second class of "surface compounds"
comprises the "Maya Blue" pigments, which are based on technology created by
the ancient Maya. These compounds are organic/inorganic "surface
complexes" consisting of the dye indigo and palygorskite, a common
clay. Both of these materials are currently being commercialized
based on results obtained through the combination of simulation
and synchrotron studies.
In electromagnetic-thermal coupled problems such as microwave heating,
the electric field deposits energy into the heated material, which causes
its temperature to rise. The rising temperature, in turn, changes
certain dielectric properties of the material, which affects
the original electromagnetic field. In this way, the processes are coupled
in both directions. The time-harmonic Maxwell's equations are used to model the
electromagnetic field, and the thermal part of the process is modeled using the
heat-transfer equation. We present a monolithically coupled model where Maxwell's
equations are discretized using higher-order edge elements, and standard continuous
elements are used for the temperature. As an introduction, we say a few words
about the hp-FEM and automatic adaptivity. Numerical examples of microwave
heating are presented.
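For orientation, a schematic form of such a coupled system (a sketch under common simplifying assumptions; boundary conditions and the precise loss model used in the talk are omitted) is

    \nabla \times \left( \tfrac{1}{\mu} \nabla \times \mathbf{E} \right) - \omega^2 \varepsilon(T)\,\mathbf{E} = -\mathrm{i}\,\omega\,\mathbf{J},
    \qquad
    \rho c\, \frac{\partial T}{\partial t} - \nabla \cdot (\lambda \nabla T) = \frac{\omega\, \varepsilon''(T)}{2}\, |\mathbf{E}|^2 ,

where E is the time-harmonic electric field phasor, T the temperature, and the right-hand side of the heat-transfer equation represents the dielectric losses that heat the material.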
Consistent hashing is a relatively new approach to
constructing an index among a set of distributed systems, called a
Distributed Hash Table (DHT). The set of systems is partitioned
through the assignment of random node identifiers, and key-value
mappings are stored on the hosts whose identifiers are "nearest" to
the hash of the mapping's key.
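As a minimal sketch of this placement rule (an illustration of ours, not the implementation discussed in the talk; the host names, the 32-bit ring, and the use of SHA-1 are arbitrary choices):

```python
import bisect
import hashlib

def node_id(value: str) -> int:
    # Hash a string onto a 32-bit identifier ring.
    return int(hashlib.sha1(value.encode()).hexdigest(), 16) % (2 ** 32)

class HashRing:
    """Toy consistent-hashing index: each key-value mapping is stored on the
    host whose identifier is nearest (clockwise) to the hash of the key."""

    def __init__(self, hosts):
        self.ring = sorted((node_id(h), h) for h in hosts)
        self.ids = [i for i, _ in self.ring]

    def lookup(self, key: str) -> str:
        # First host identifier at or after the key's hash, wrapping around.
        i = bisect.bisect_left(self.ids, node_id(key)) % len(self.ring)
        return self.ring[i][1]

ring = HashRing(["hostA", "hostB", "hostC"])
print(ring.lookup("some-key"))          # the host responsible for "some-key"
```

Because only the keys "nearest" a departing or arriving host move, the index remains mostly stable as membership changes.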
When the number of participating hosts is large and changes over
time, it becomes impractical to consistently disseminate the current
set of participants and their identifiers; nonetheless, it is
desirable to permit efficient indexing despite this lack of global
knowledge. Techniques have been developed that allow participants to
efficiently construct and maintain routing infrastructures through
which hosts cooperatively discover others near any particular
identifier in O(log(N)) communications, where N is
the number of participating systems.
This talk will present an introduction to consistent hashing,
distributed hash tables, and open research challenges related to
them.
Bio: Eric Freudenthal is an Assistant Professor in the Computer
Science department at UTEP. He is one of the originators of
Coral/CDN, a self-organizing content distribution network that runs
upon Planetlab and serves tens of millions of web pages daily.
Coral's indexing mechanism is a novel "sloppy" DHT that provides
bandwidth that scales linearly with offered load.
While peer-to-peer systems are notorious for their ability
to exploit available resources to efficiently disseminate
information, these systems provide no guarantees of data integrity.
Self-validating authenticated dictionaries built upon (Merkle)
hash trees can be incrementally distributed upon a peer-to-peer
framework and thus can disseminate key:value mappings with strong
integrity guarantees.
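As a rough sketch of the underlying idea (an illustration of ours, not Fern's actual data structure; the mappings and the use of SHA-256 are arbitrary), a Merkle hash tree lets a client verify any mapping against a single trusted root hash:

```python
import hashlib

def H(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Compute the root hash of a Merkle tree over serialized key:value
    mappings (toy example; Fern's actual trie layout differs)."""
    level = [H(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [H(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# A client that trusts only the (signed) root hash can verify any single
# mapping received from an untrusted peer by checking an O(log n)
# authentication path against that root.
mappings = [b"alice:10", b"bob:20", b"carol:30", b"dave:40"]
print(merkle_root(mappings).hex())
```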
It is desirable to permit the publisher of an authenticated
dictionary to efficiently disseminate updates with equivalent
integrity properties. Several approaches have been proposed.
Fern's data structure and algorithm are significantly simpler than
others' approaches, and Fern's performance is competitive or
superior. We present the structure of Fern and an informal analysis
of the number of vertices of a Fern-trie that must be transferred to
systems that monitor a set of key:value mappings stored within
Fern.
Since the 1960s, many algorithms have been designed to deal with
interval uncertainty. In the last decade, there has been a lot of
progress in extending these algorithms to the case when we have a
combination of interval and probabilistic uncertainty. We
provide an overview of related algorithms, results, and remaining
open problems.
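As a tiny, hypothetical illustration of why such algorithms are nontrivial (this example is ours, not from the talk), naive interval arithmetic can badly overestimate the true range of even a simple expression:

```python
# x is only known to lie in the interval [0, 1]; consider f(x) = x - x^2.
lo, hi = 0.0, 1.0

# Naive interval arithmetic treats the two occurrences of x as independent:
# [0, 1] - [0, 1]^2 = [0, 1] - [0, 1] = [-1, 1].
naive = (lo - hi ** 2, hi - lo ** 2)

# The exact range of f over [0, 1] is [0, 0.25] (maximum at x = 0.5),
# so the naive enclosure overestimates the true uncertainty considerably.
exact = (0.0, 0.25)

print("naive:", naive, "exact:", exact)
```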
Klondike is the well-known 52-card Solitaire game available on
almost every computer. We have investigated the complexity of
determining whether an n-card Klondike initial configuration can
lead to a win. To talk about complexity, we need to generalize the
game with an arbitrary number of cards. We have shown that the
problem is NP-complete. A proof of this can be presented to a
mathematically oriented general audience. The problem remains
NP-complete when only three suits are allowed instead of the usual
four. When only two suits of opposite color are available, the
problem is NLog-space-hard. When the only two suits have the same
color, two restrictions are shown to be in AC0 and in NL, respectively.
When a single suit is allowed, the problem drops in complexity down
to AC0[3], that is, the problem is solvable by a family of constant
depth unbounded fan-in {and, or, mod3}-circuits. Other cases are
also studied: for example, the “no King” variant with an arbitrary number of
suits of the same color and with an empty “pile” is
NLog-space-complete.
Topological spaces are fundamental objects of study in Mathematics.
In a given topological space, one can define operations known as the
closure and limit point operators. These different, but related,
operators each naturally give rise to a Boolean algebra with
additional operations (BAO for short). In the 1940’s, McKinsey and
Tarski axiomatized the BAO arising from the closure operator of a
dense-in-itself metrizable space. As a corollary, the BAO’s coming
from the closure operator of the real numbers, the rational numbers,
and the Cantor space each have the same axiomatization. In the
spirit of McKinsey and Tarski’s work, during the 1990’s Shehtman
axiomatized the BAO arising from the limit point operator of a
0-dimensional dense-in-itself metrizable space. From Shehtman’s work
it follows that the BAO coming from the limit point operator of the
rational numbers has the same axiomatization as the BAO coming from
the limit point operator of the Cantor space. In this talk we
present a new and simplified proof of this axiomatization, which
takes advantage of new techniques in modal logic introduced in the
past decade.
In order to investigate the stochastic dominance (SD)
approach to decision-making under risk and uncertainty in
mathematical finance in a more realistic setting, we re-examine the
current practice of SD. Several improvements and extensions have
been found, including procedures for approximating utility
functions, elimination of the assumption of differentiability of
utility functions, and necessary and sufficient conditions
for SD rules.
We provide new perspectives on the semantics of
logic programs with abstract constraints. Abstract constraints are
formal entities that have been introduced to enable the analysis of
various classes of extensions of logic programming, including
aspects like collections and aggregates. To this end we introduce
several notions of computation and propose to use the results of
computations as answer sets of programs with constraints.
We discuss the rationale behind different classes of computations
and study the relationships among them and among the corresponding
concepts of answer sets. The proposed semantics generalize the
answer set semantics for programs with monotone, convex and/or
arbitrary constraints described in the literature.
Today there are many methods for the solution of equations with interval
parameters. Unfortunately, there are very few efficient methods that
can be directly applied to the solution of complex engineering
problems. The sensitivity analysis method gives a very good inner
approximation of the exact solution set. This method was implemented
in C++ by the author, and the program can be recompiled on the
Windows, Linux, and Solaris operating systems. The program is able to
solve 1D, 2D, and 3D linear problems of electrostatics with interval
parameters. In order to describe structures with uncertain parameters,
a special scripting language was applied. The program is also able to solve
problems using the endpoint combination method and the Taylor expansion
method. Additionally, it is possible to solve problems with uncertain
functional parameters; this requires special
finite elements. The program is very general and can be applied to
the solution of complex engineering problems. The program is part of a
web application, which is written in PHP and can be run on
the author's web page http://andrzej.pownuk.com. The program is
object oriented and can be easily extended to the solution of
more complicated problems, for example nonlinear problems of
computational mechanics.
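As a minimal sketch of the idea behind the sensitivity analysis method (a hypothetical one-parameter toy problem of ours, not the author's C++ implementation), the sign of the sensitivity at the interval midpoint selects the endpoint combination that bounds the solution:

```python
def solution(p):
    # Toy one-degree-of-freedom "FEM" problem: K(p) * u = f with K = p, f = 1,
    # so the displacement is u = 1 / p.
    return 1.0 / p

def sensitivity_bounds(p_lo, p_hi):
    """Evaluate du/dp at the interval midpoint and use its sign to pick the
    endpoint combination; this gives an inner approximation of the exact
    solution set when u is monotone in the parameter."""
    p_mid = 0.5 * (p_lo + p_hi)
    eps = 1e-6 * (p_hi - p_lo)
    dudp = (solution(p_mid + eps) - solution(p_mid - eps)) / (2.0 * eps)
    if dudp >= 0.0:
        return solution(p_lo), solution(p_hi)
    return solution(p_hi), solution(p_lo)

print(sensitivity_bounds(1.0, 2.0))     # (0.5, 1.0): u decreases with p
```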
Tunna Baruah
(Department of Physics, UTEP)
UNDERSTANDING MOLECULAR PROPERTIES FROM DENSITY FUNCTIONAL THEORY
Jakub Cerveny and Pavel Solin
(Department of Mathematical Sciences, UTEP)
ON MULTI-MESH HP-FEM FOR COUPLED PROBLEMS
In this talk we present results and the main ideas of our research, whose
goal is to address both issues mentioned above: in order to resolve individual
phenomena in the various physical fields efficiently, we consider every solution
component on an individual mesh on which, moreover, an individual adaptive
strategy is employed. At the same time, each physical field is discretized using
higher-order finite elements which conform to the corresponding Sobolev space.
Two examples are presented. The first is related to incompressible flow with
heat transfer in 2D, where all fields v1, v2, p, ... are discretized on
different meshes. We demonstrate that our new multi-mesh approach leads to
smaller and better conditioned linear systems, which in turn require less CPU
time to solve. The second example is concerned with a model of stationary linear
thermoelasticity where each displacement component and the temperature are
approximated on different meshes which are equipped with individual energy-based
adaptivity mechanisms. Here, the multi-mesh approach improves the convergence
of the hp-adaptive process.
Russell R. Chianelli
(Materials Research and Technology Institute, UTEP)
SIMULATION AND SYNCHROTRON TECHNIQUES APPLIED TO PROBLEMS IN
MATERIALS SCIENCE: CATALYSTS AND AZUL MAYA PIGMENTS
Lenka Dubcova and Pavel Solin
(Department of Mathematical Sciences, UTEP)
COMPUTER MODELING OF AN ELECTROMAGNETIC-THERMAL COUPLED PROBLEM
Eric Freudenthal
(Department of Computer Science, UTEP)
CONSISTENT HASHING: AN EFFICIENT APPROACH FOR DISTRIBUTED
INDEXING
Eric Freudenthal and Luc Longpre
(Department of Computer Science, UTEP)
FERN: A MUTABLE AUTHENTICATED DICTIONARY FOR PEER-TO-PEER
SYSTEMS
Vladik Kreinovich
(Department of Computer Science, UTEP)
APPLICATION-MOTIVATED COMBINATIONS OF INTERVAL AND PROBABILITY
APPROACHES, WITH APPLICATION TO GEOINFORMATICS, BIOINFORMATICS, AND
ENGINEERING
Luc Longpre
(Department of Computer Science, UTEP)
Pierre McKenzie
(U. de Montreal)
SOLITAIRE
Joel Lucero-Bryan
(Department of Mathematical Sciences, NMSU)
A RESULT CONCERNING THE RATIONAL NUMBERS
Hung T. Nguyen and Hien Tran
(Department of Mathematical Sciences, NMSU)
ON FOUNDATIONS OF STOCHASTIC DOMINANCE IN MATHEMATICAL FINANCE
Enrico Pontelli and Son Tran
(Department of Computer Science, NMSU)
LOGIC PROGRAMMING WITH ABSTRACT CONSTRAINT ATOMS
Andrzej Pownuk
(Department of Mathematical Sciences, UTEP)
GENERAL INTERVAL FEM PROGRAM BASED ON SENSITIVITY
ANALYSIS METHOD