Test 2, CS 5353, Fall 2008

Date: Thursday, November 20, 2008.

Name (please print): ___________________________________________________________________________________

1. Data processing: interval uncertainty.
Let us assume that data processing is performed by using a function y = f(x1, x2) = x1^2 + x2, the measurement results are X1 = 1.0 and X2 = 2.0, and we only know the upper bounds on the measurement errors: Δ1 = 0.1 and Δ2 = 0.2. Find the upper bound Δ on the approximation error Δy of the result of data processing. Since the function is monotonic, you can compute the exact range. Compare this range with the results of linearized computations.
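
For reference, this computation can be sketched in Python. Since f is increasing in both variables on the box [0.9, 1.1] x [1.8, 2.2], the exact range is attained at the endpoints; the linearized bound uses the partial derivatives at the measured point:

```python
def f(x1, x2):
    return x1**2 + x2

X1, X2 = 1.0, 2.0
D1, D2 = 0.1, 0.2

# f is increasing in both variables on [0.9, 1.1] x [1.8, 2.2],
# so the exact range is attained at the box's corner points.
y_low  = f(X1 - D1, X2 - D2)   # 2.61
y_high = f(X1 + D1, X2 + D2)   # 3.41
y_mid  = f(X1, X2)             # 3.0

Delta_exact = max(y_high - y_mid, y_mid - y_low)   # 0.41

# Linearized estimate: Delta ~ |df/dx1| * D1 + |df/dx2| * D2
Delta_lin = abs(2 * X1) * D1 + abs(1) * D2         # 0.4
```

The linearized bound (0.4) slightly underestimates the exact bound (0.41) because it drops the quadratic term.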

2. Interval uncertainty in data processing: computational aspects.
For the formula for computing the upper bound on the error of the result of data processing, explain how the computational complexity (= the number of computational steps) depends on the choice of the parameters hi used in numerical differentiation, and what choice makes the computational complexity the smallest.
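
One common choice, sketched here as an illustration (not necessarily the exact formula from the course notes): with one-sided differences and the step h_i = Δ_i, each term |c_i| · Δ_i simplifies so that no divisions are needed, and the whole bound costs n + 1 calls to f:

```python
def delta_bound(f, X, Delta):
    """Linearized error bound via one-sided numerical differentiation
    with step h_i = Delta_i; costs n + 1 calls to f in total."""
    y0 = f(X)
    total = 0.0
    for i, d in enumerate(Delta):
        Xi = list(X)
        Xi[i] += d   # step h_i = Delta_i
        # |c_i| * Delta_i = |(f(X + Delta_i e_i) - f(X)) / Delta_i| * Delta_i
        #                 = |f(X + Delta_i e_i) - f(X)|
        total += abs(f(Xi) - y0)
    return total

# On the function from Problem 1:
d = delta_bound(lambda x: x[0]**2 + x[1], [1.0, 2.0], [0.1, 0.2])   # ~0.41
```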

3. Interval uncertainty in data processing: Monte-Carlo method.
Explain why the Monte-Carlo method is useful in computing the interval uncertainty of the result of data processing, and for what number of inputs it is useful.
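
A naive sampling sketch (the method covered in class may use specially chosen simulated errors instead): sample the errors within their bounds and track the range of f. The key point is that the number of simulations N does not grow with the number of inputs n, so Monte-Carlo pays off when n is large:

```python
import random

def f(x1, x2):
    return x1**2 + x2

random.seed(0)
X = (1.0, 2.0)
D = (0.1, 0.2)
N = 10000

ys = []
for _ in range(N):
    # Simulate measurement errors within their known bounds.
    x1 = X[0] + random.uniform(-D[0], D[0])
    x2 = X[1] + random.uniform(-D[1], D[1])
    ys.append(f(x1, x2))

# The sampled range approximates (from inside) the exact range [2.61, 3.41];
# the cost N is chosen for the desired accuracy, independently of n.
y_min, y_max = min(ys), max(ys)
```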

4. Bisection method. Describe the bisection method for finding a value x on a given interval [l, u] for which f(x) = 0. Follow the first two steps of this method for finding the square root of 2, i.e., the value x from the interval [0, 2] for which f(x) = x^2 - 2 = 0. Explain where the bisection method is used in Monte-Carlo estimation of interval uncertainty.
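
A minimal sketch of the method for this example (assuming f(l) <= 0 <= f(u)):

```python
def bisect(f, l, u, steps):
    # Assumes f(l) <= 0 <= f(u); keeps the sign change inside [l, u].
    for _ in range(steps):
        m = (l + u) / 2
        if f(m) <= 0:
            l = m   # root is in the right half
        else:
            u = m   # root is in the left half
    return l, u

f = lambda x: x**2 - 2
# Step 1: m = 1.0, f(1.0) = -1.0  < 0  ->  new interval [1.0, 2.0]
# Step 2: m = 1.5, f(1.5) =  0.25 > 0  ->  new interval [1.0, 1.5]
interval = bisect(f, 0.0, 2.0, 2)   # (1.0, 1.5)
```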

5. Interval uncertainty in data fusion.
Let us assume that we have made three measurements of the same quantity x.

How can we fuse these measurement results? In other words, based on these measurement results, what can we conclude about the actual (unknown) value of the quantity x?
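
Under interval uncertainty, one natural fusion rule is to intersect the measurement intervals; a sketch with hypothetical intervals (the actual three measurement results are omitted in this copy):

```python
# Hypothetical measurement intervals [Xi - Di, Xi + Di] (illustration only):
intervals = [(0.9, 1.2), (1.0, 1.3), (0.95, 1.15)]

# If every interval is guaranteed to contain x, then x lies in the
# intersection: [max of lower endpoints, min of upper endpoints].
lo = max(l for l, u in intervals)   # 1.0
hi = min(u for l, u in intervals)   # 1.15
# An empty intersection (lo > hi) would signal an outlier measurement.
```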

6. Effect of reliability: case of independence.
Let us assume that we are producing a 1D map, and that to approximate a value at a grid point, we use the 1-Nearest Neighbor (1NN) algorithm, i.e., we take the value of the nearest point at which a measurement was performed. We are interested in estimating the value at the point x = 1. We have made three measurements:

If all these measurements were 100% reliable, we would take the value v1 = 1.0 at the nearest point x1 = 0.7 as the desired estimate v. However, in reality, the values vi are not absolutely reliable. We assume that these probabilities describe independent events. Estimate the variance of v based on this information. What is the probability that all three estimates are wrong and thus, that we are unable to make any estimate?
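
With independent reliabilities p_i, the probability that all three estimates are wrong is the product of the individual failure probabilities; a sketch with hypothetical p_i (the actual values are omitted in this copy):

```python
# Hypothetical reliabilities of the three measurements (illustration only):
p = [0.9, 0.9, 0.8]

# Independence: P(all wrong) = product of the failure probabilities 1 - p_i.
prob_all_wrong = 1.0
for pi in p:
    prob_all_wrong *= (1 - pi)
# 0.1 * 0.1 * 0.2 = 0.002
```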

7. Monte-Carlo approach: need for re-scaling.
Explain why, for very reliable components, we cannot directly use the Monte-Carlo method and instead need re-scaling. Provide a numerical example of the number of iterations that are needed to achieve a given accuracy. Describe the main idea of the re-scaling and how it helps.
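
A back-of-the-envelope example (the numbers are illustrative, not from the course handout):

```python
# A component fails with probability p = 1e-6 (illustrative value).
# Plain Monte-Carlo observes a failure only about once per 1e6 simulations;
# for roughly 10% relative accuracy we need on the order of 100 observed
# failures, i.e. about 1e8 runs.
p = 1e-6
runs_plain = 100 / p            # ~1e8

# Re-scaling idea (sketch): simulate with an inflated failure probability
# C * p so that failures become common, then scale the estimate back down.
C = 1e4
runs_rescaled = 100 / (C * p)   # ~1e4
```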

8. Possibility of re-scaling: case of independence.
On the example of two cases:

describe how the probability Δp (that f does not trust s) changes if we re-scale the values Δp1 and Δp2.
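
Assuming independence and that f distrusts s unless both component events check out (a common model; the exact formula from class may differ), a sketch with hypothetical values for the two omitted cases:

```python
# Hypothetical per-case probabilities (the two cases above are omitted):
dp1, dp2 = 0.1, 0.2

# Independence: f distrusts s unless both events "check out".
dp = 1 - (1 - dp1) * (1 - dp2)                      # 0.28

# Re-scale both inputs by a factor lam:
lam = 0.5
dp_scaled = 1 - (1 - lam * dp1) * (1 - lam * dp2)   # 0.145
# For small dp_i, dp ~ dp1 + dp2, so dp itself scales roughly by lam.
```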

9. Probabilities: case of possible dependence.
Assume that the probability p(A) of the event A is 0.8, and the probability p(B) of the event B is 0.8.
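
If the question asks what can be said about p(A & B) when the dependence between A and B is unknown (an assumption about the omitted part of the problem), the Frechet bounds give the answer:

```python
pA, pB = 0.8, 0.8

# Frechet bounds on p(A & B) when nothing is known about the dependence:
lower = max(0.0, pA + pB - 1.0)   # 0.6 (maximal "disagreement" between A and B)
upper = min(pA, pB)               # 0.8 (e.g., when A = B)
```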

10. Reliability: case of possible dependence.
On the example of two cases:

estimate the worst-case probability Δp that f does not trust s. Use the algorithm from the handout that we studied in class (and which you implemented in your programs).
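
The handout algorithm itself is not reproduced here; for reference, the standard worst-case (Frechet-type) bound under unknown dependence, with hypothetical inputs for the two omitted cases:

```python
# Hypothetical per-case distrust probabilities (the two cases are omitted):
dp1, dp2 = 0.1, 0.2

# Worst case over all possible dependencies: the failure events may be
# disjoint, so the probabilities can add up (capped at 1).
dp_worst = min(1.0, dp1 + dp2)   # 0.3
```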