The ideal decision-making situation is one in which we know which characteristic $J$ we want to maximize, and we know the value $J(a)$ of this characteristic for every alternative $a$. In real life, even when we know $J$, we often do not know $J(a)$ precisely; we only know an {\it interval} $[J^-(a),J^+(a)]$ of possible values of $J(a)$. How do we then choose $a$?

It is clear that if for some alternatives $a$ and $b$ we have $J^-(a)>J^+(b)$, then $b$ is guaranteed to be worse than $a$, and thus $b$ will not be chosen. In many cases, however, after applying this ``rule'', we are still left with many alternatives to choose from. For example, suppose we want to buy the most fuel-efficient car, and the choice is between a car $C_1$ with fuel consumption of 8--10 liters per 100 km and a car $C_2$ with fuel consumption of 9--12. According to the above criterion, both cars have to be considered; common sense, however, says that it is reasonable to choose the first car, since both bounds of its consumption interval are smaller.

In general, if $J^-(a)>J^-(b)$ and $J^+(a)>J^+(b)$, then it is reasonable to prefer $a$ to $b$.
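This dominance rule is straightforward to apply algorithmically. The following Python sketch (the function names `dominates` and `nondominated` are illustrative, not from the paper) filters out every alternative that is dominated on both interval bounds; for the car example, $J$ is taken as negated fuel consumption, so that larger values are better.

```python
def dominates(a, b):
    # a = (J_minus, J_plus) dominates b when both bounds are strictly larger
    # (J is being maximized); this also covers the case J^-(a) > J^+(b).
    return a[0] > b[0] and a[1] > b[1]

def nondominated(alts):
    """Keep only the alternatives not dominated under the interval criterion."""
    return {name: iv for name, iv in alts.items()
            if not any(dominates(other, iv)
                       for oname, other in alts.items() if oname != name)}

# Cars compared by fuel efficiency: take J = -consumption so larger is better.
# C1 consumes 8-10 l/100 km -> J in [-10, -8]; C2 consumes 9-12 -> J in [-12, -9].
cars = {"C1": (-10.0, -8.0), "C2": (-12.0, -9.0)}
print(nondominated(cars))  # only C1 survives: it dominates C2 on both bounds
```

Note that the filter can keep several incomparable alternatives (e.g., intervals $[8,12]$ and $[9,10]$), which is exactly the situation the paper addresses.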

The author applies this approach to 0--1 linear programming problems ($\sum c_jx_j\to\max$ subject to $\sum a_{ij}x_j\le b_i$, $x_j\in\{0,1\}$) in which the coefficients $a_{ij}$, $b_i$, and $c_j$ are only known to lie within given intervals. Each problem of this type is reduced to two similar problems with real-valued coefficients.
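The review does not spell out which two real-valued problems the reduction produces. One common construction (an assumption here, not necessarily the author's, and valid when all $a_{ij}\ge 0$) pairs an optimistic problem, which takes the largest objective coefficients $c_j^+$ with the loosest constraints ($a_{ij}^-$, $b_i^+$), against a pessimistic one with $c_j^-$, $a_{ij}^+$, $b_i^-$; the two optima then bound the achievable objective value. A brute-force sketch:

```python
from itertools import product

def solve_01lp(c, A, b):
    """Brute-force 0-1 LP: maximize sum(c[j]*x[j]) s.t. A x <= b, x[j] in {0,1}."""
    best_val, best_x = None, None
    n = len(c)
    for x in product((0, 1), repeat=n):
        if all(sum(A[i][j] * x[j] for j in range(n)) <= b[i]
               for i in range(len(b))):
            val = sum(c[j] * x[j] for j in range(n))
            if best_val is None or val > best_val:
                best_val, best_x = val, x
    return best_val, best_x

def interval_01lp(c_lo, c_hi, A_lo, A_hi, b_lo, b_hi):
    # Assumed construction (requires a_ij >= 0): optimistic = best objective
    # coefficients, smallest constraint coefficients, largest right-hand sides;
    # pessimistic = the opposite choices.
    opt, _ = solve_01lp(c_hi, A_lo, b_hi)
    pess, _ = solve_01lp(c_lo, A_hi, b_lo)
    return pess, opt

# Toy instance: c1 in [3,4], c2 in [2,3]; a11 in [1,2], a12 = 1; b1 in [2,3].
print(interval_01lp([3, 2], [4, 3], [[1, 1]], [[2, 1]], [2], [3]))  # -> (3, 7)
```

Brute force is only viable for small $n$; the point of the sketch is the interval-to-real reduction, not the solver.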