Many real-life objects are described by systems of linear differential equations of the type $\dot x=Ax+Bu$, $y=Cx$, where the vector $u$ is a given input (e.g., a control), the vector $x(t)$ is the state of the system at the moment $t$, $y(t)$ describes the measurement results at the moment $t$, and $A$, $B$, and $C$ are constant matrices. In principle, there exist analytical formulas for solving such systems, but these formulas are often computationally expensive; therefore, to predict the state of the system, computer simulations are used. Numerical simulation means that we compute the values of $x$, $u$, and $y$ at the moments $t_0$, $t_1=t_0+h$, $t_2=t_0+2h$, etc. If we simply use Euler's approximate formula $x(t_{i+1})=x(t_i)+ [A_hx(t_i)+B_hu(t_i)]h$ with $A_h=A$ and $B_h=B$, we introduce an additional time-discretization error. It is better, therefore, to use matrices $A_h\ne A$, ..., that describe the exact evolution of the continuous system during the time $h$.
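To illustrate the time-discretization error mentioned above, here is a minimal sketch for the scalar case $\dot x=ax+bu$ with constant input (the function name and the particular numbers are ours, chosen only for illustration):

```python
import math

def euler_simulate(a, b, u, x0, h, steps):
    """Simulate the scalar system x' = a*x + b*u (constant input u)
    with Euler steps of size h; this is the A_h = A, B_h = B choice."""
    x = x0
    for _ in range(steps):
        x = x + (a * x + b * u) * h
    return x

# Example: x' = -x, x(0) = 1, no input; the exact value is x(1) = e^{-1}.
exact = math.exp(-1.0)
coarse = euler_simulate(-1.0, 0.0, 0.0, 1.0, 0.1, 10)
fine = euler_simulate(-1.0, 0.0, 0.0, 1.0, 0.01, 100)
# The discretization error shrinks as the step h decreases,
# but it never vanishes for A_h = A.
```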

There exist formulas that compute $A_h$ based on $A$, ... The precise formulas are computationally difficult; therefore, approximate formulas are used. These approximate formulas are quite sufficient if we simply want to get a feel for the system's dynamics. However, if we want to draw conclusions and estimates that *guarantee* the properties of the original continuous-time system, then we need *interval* estimates of the result: in other words, we need to design an *interval* discrete-time system $x(t_{i+1})= A_h x(t_i)+ B_h u(t_i)$ whose interval solution is guaranteed to contain the solution of the original continuous-time system. In designing this system, we must also take into consideration that the parameters $A$, $B$, and $C$ of the original system may themselves be known only with interval uncertainty, i.e., that we start with an interval system in the first place.
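For the linear system above, the exact discretization matrix is $A_h=e^{Ah}$, and the approximate formulas in question typically truncate the Taylor series of this matrix exponential. A minimal sketch of such a truncation, in pure Python on a $2\times 2$ example (the helper names are ours, not the authors'):

```python
import math

def mat_mul(X, Y):
    """Multiply two square matrices given as lists of lists."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm_taylor(A, h, terms=20):
    """Approximate A_h = e^{A h} by the truncated Taylor series
    I + A h + (A h)^2/2! + ... ; more terms mean a smaller truncation
    error, but a guaranteed bound needs interval remainder estimates."""
    n = len(A)
    Ah = [[A[i][j] * h for j in range(n)] for i in range(n)]
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    term = [row[:] for row in result]  # running term (A h)^k / k!
    for k in range(1, terms + 1):
        term = mat_mul(term, Ah)
        term = [[term[i][j] / k for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)]
                  for i in range(n)]
    return result

# Diagonal test case: A = diag(-1, -2), h = 0.1,
# so A_h should be close to diag(e^{-0.1}, e^{-0.2}).
A_h = expm_taylor([[-1.0, 0.0], [0.0, -2.0]], 0.1)
```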

The first idea is to simply apply interval computations to the known formulas that describe $A_h$, ..., in terms of $A$, ... In this manner, we do get an enclosure, but this enclosure is often much wider than the actual range of the quantities that we want to enclose.
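The overestimation comes from the dependency problem of naive interval arithmetic: repeated occurrences of the same uncertain parameter are treated as independent. A small sketch (our own toy interval class, with outward rounding omitted), evaluating a truncated series $p(a)=1+a+a^2/2$ for $e^a$ over an interval $a$:

```python
class Interval:
    """Minimal interval arithmetic; real libraries also do outward
    rounding, which this sketch omits for brevity."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        return Interval(self.lo + other.lo, self.hi + other.hi)

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Interval) else Interval(other, other)
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    __rmul__ = __mul__

    @property
    def width(self):
        return self.hi - self.lo

# Enclose p(a) = 1 + a + a^2/2 for a in [-1, 1].
a = Interval(-1.0, 1.0)
naive = 1 + a + a * a * 0.5  # naive evaluation gives [-0.5, 2.5]
# The true range of p over [-1, 1] is [0.5, 2.5] (width 2): the naive
# enclosure has width 3, because the two occurrences of `a` in the
# formula are treated as if they could vary independently.
```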

To make the intervals narrower, the authors propose the following idea. The above-described method, crudely speaking, corresponds to the case when, on each interval $[t_i,t_{i+1}]$, we approximate the state $x(t)$ by a constant $x(t)=x(t_i)$. This approximation corresponds to taking the first term of the Taylor expansion. To get a better approximation, we can use more terms: $x(t)\approx x(t_i)+\dot x(t_i)(t-t_i)+\ddot x(t_i)(t-t_i)^2/2+\ldots$ To use these terms, we must describe not only how $x(t)$ changes with time, but also how the corresponding derivatives $\dot x(t), \ddot x(t),\ldots$ change. The differential equations that describe how these derivatives evolve can be easily deduced by differentiating the equation for $x(t)$. As a result, to approximate a system of differential equations with, say, $n$ variables $x_1,\ldots,x_n$, we use a discrete-time system with $N\gg n$ variables that correspond to $x_1,\ldots, x_n,\dot x_1,\ldots, \dot x_n,\ldots$ (or, equivalently, a discrete-time system of higher order).
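For the scalar equation $\dot x=ax$, each derivative satisfies $x^{(k)}=a^k x$, so the higher-order Taylor step reduces to multiplying $x$ by a truncated exponential series; the effect of taking more terms can be sketched as follows (a scalar stand-in for the authors' augmented-system construction, with our own function names):

```python
import math

def taylor_step(a, x, h, order):
    """One step of x' = a*x using x(t+h) = x + x'h + x''h^2/2 + ...;
    since the k-th derivative here is a^k * x, the step multiplies x
    by the partial sum of (a h)^k / k! up to the given order."""
    factor, term = 1.0, 1.0
    for k in range(1, order + 1):
        term *= a * h / k
        factor += term
    return x * factor

def simulate(a, x0, h, steps, order):
    x = x0
    for _ in range(steps):
        x = taylor_step(a, x, h, order)
    return x

# x' = -x, x(0) = 1, simulated to t = 1; exact answer is e^{-1}.
exact = math.exp(-1.0)
err1 = abs(simulate(-1.0, 1.0, 0.1, 10, order=1) - exact)  # Euler
err4 = abs(simulate(-1.0, 1.0, 0.1, 10, order=4) - exact)  # higher order
# The order-4 step is dramatically more accurate at the same step size,
# which is why the higher-order enclosures come out much narrower.
```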

The authors show (using the example of an electric circuit) that higher-order approximations lead to much narrower (almost optimal) enclosures.