Why Is Theory of Computation Important?

Question: From the practical viewpoint, why is it necessary to study theory of computation?

Answer: When we encounter several similar practical problems, it is often beneficial not to solve them one by one, but to find a general algorithm that would enable us to solve all these problems.

How general should it be? Sometimes, practitioners try to solve problems in such generality that no general algorithm is possible -- and, of course, they do not succeed. To avoid wasting time on such impossible efforts, it is desirable to know which problems can be algorithmically solved and which cannot. Understanding which problems can be algorithmically solved is one of the main objectives of theory of computation.

In situations when an algorithm is, in principle, possible, it sometimes happens that every possible algorithm requires so much computation time -- e.g., longer than the lifetime of the Universe -- that it is not practically feasible. It is therefore desirable to know whether a given problem can be feasibly solved. This is also one of the main objectives of theory of computation.

Example: When we write a program, it is desirable to be able to check that this program always halts, and that it always produces a correct result. It may therefore seem reasonable to look for an algorithm that, given a program and its input, checks whether the program halts on this input -- but theory of computation proves that no such algorithm is possible.

Similarly, it may seem reasonable to look for a general algorithm that, given a program that always halts, checks whether this program always produces the correct result -- but theory proves that no such algorithm is possible either.
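To understand why, here is the classical diagonalization argument behind the first of these results, sketched in Java. The method halts() is hypothetical -- the whole point of the argument is that it cannot exist:

// Hypothetical method that, given (the source code of) a program p
// and an input x, always halts and tells whether p halts on x.
// The argument below shows that no such method can exist.
static boolean halts(String p, String x) {
    throw new UnsupportedOperationException("cannot exist");
}

// If halts() existed, we could write the following program, which
// does the opposite of what halts() predicts about it:
static void paradox(String p) {
    if (halts(p, p)) {
        while (true) { }  // predicted to halt? then loop forever
    }                     // predicted to loop? then halt immediately
}

// Applying paradox to its own source code yields a contradiction:
// it halts on itself if and only if it does not halt on itself.
// So halts() cannot exist.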

Question: Why do we need to study primitive recursive functions?

Answer: Primitive recursive functions describe for-loops. The advantage of using only for-loops in a program is that such programs always halt, never go into an infinite loop. It is therefore important to analyze whether a given task can be computed by such a program.
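For illustration, here is a minimal sketch in Java: addition and multiplication written in the style of primitive recursion, using only for-loops whose bounds are known before the loop starts -- so, for non-negative inputs, these methods are guaranteed to halt.

// add(x, 0) = x; add(x, y + 1) = add(x, y) + 1
static int add(int x, int y) {
    int result = x;
    for (int i = 0; i < y; i++) {
        result = result + 1;
    }
    return result;
}

// mult(x, 0) = 0; mult(x, y + 1) = add(mult(x, y), x)
static int mult(int x, int y) {
    int result = 0;
    for (int i = 0; i < y; i++) {
        result = add(result, x);
    }
    return result;
}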

Question: Why do we need to study decidable and recursively enumerable (r.e.) sets?

Answer: The notion of a set is a basic notion of mathematics. We study sets starting from an early age, so we have an intuition about sets. It is desirable to use this intuition when studying what is computable. This is exactly what decidable and r.e. sets enable us to do: to reformulate questions about algorithms as questions about sets -- e.g., about whether a given set is decidable. After this reformulation, we can use our set intuition to solve the corresponding problems.

For example, in set terms, union and intersection are natural operations. So, we can easily prove results about decidability of union and intersection.
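As a minimal sketch in Java (the two decision methods below are illustrative assumptions): if each of two sets has a decision procedure -- a method that always halts and returns whether a given element belongs to the set -- then the same two methods immediately decide the union and the intersection.

// Illustrative decision procedures for two decidable sets of integers:
static boolean inA(int x) { return x % 2 == 0; }  // A = even numbers
static boolean inB(int x) { return x % 3 == 0; }  // B = multiples of 3

// The union and the intersection are then decidable as well:
static boolean inUnion(int x)        { return inA(x) || inB(x); }
static boolean inIntersection(int x) { return inA(x) && inB(x); }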

Question: Why do we need to study Turing machines?

Answer: In many cases, we need to prove that some problem is not algorithmically solvable. If by an algorithm we mean a Java program, then, since Java programs can include many different constructions, it is difficult to prove that none of these constructions leads to the desired result.

However, it has been proven that what can be computed by a Java program can also be computed on a Turing machine. Thus, to prove that something is not computable, it is sufficient to prove that it is not computable on a Turing machine. Turing machines are much simpler than general programs, so proving that some problem cannot be solved on a Turing machine is often easier.
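To give a feel for how simple Turing machines are, here is a minimal simulator sketch in Java; the particular machine below (which flips every bit of its input) and its encoding are illustrative assumptions, not a standard definition.

import java.util.HashMap;
import java.util.Map;

public class TuringMachine {
    // A transition rule: given (state, symbol), write a symbol,
    // move the head (-1 = left, +1 = right), and enter a new state.
    record Rule(char write, int move, String next) { }

    public static void main(String[] args) {
        // Illustrative machine: flip every bit, halt on blank ('_').
        Map<String, Rule> rules = new HashMap<>();
        rules.put("q0,0", new Rule('1', +1, "q0"));
        rules.put("q0,1", new Rule('0', +1, "q0"));
        rules.put("q0,_", new Rule('_', 0, "halt"));

        // The tape: a mapping from positions to symbols.
        Map<Integer, Character> tape = new HashMap<>();
        String input = "0110";
        for (int i = 0; i < input.length(); i++) tape.put(i, input.charAt(i));

        int head = 0;
        String state = "q0";
        while (!state.equals("halt")) {
            char symbol = tape.getOrDefault(head, '_');
            Rule r = rules.get(state + "," + symbol);
            tape.put(head, r.write());
            head += r.move();
            state = r.next();
        }

        // Prints 1001: every bit of the input has been flipped.
        StringBuilder out = new StringBuilder();
        for (int i = 0; i < input.length(); i++) out.append(tape.get(i));
        System.out.println(out);
    }
}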

Question: Why is it interesting to study the propositional satisfiability problem?

Answer: To satisfactorily test a program that contains branching, we need to test both possible branches. In other words, if we have a conditional statement of the type
if (P) ... else ...,
we need to find the case when P is true. The condition P can be a propositional combination of different propositional variables. In this case, it is important to be able to find the values of these variables that make P true. This is exactly the propositional satisfiability problem.
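As a minimal sketch in Java (representing the condition P as a predicate over an array of variable values is an illustrative assumption): the straightforward way to solve satisfiability is to try all 2^n assignments of the n variables -- which is feasible only for small n.

import java.util.function.Predicate;

// Brute-force satisfiability: try all 2^n assignments of n variables.
// Returns a satisfying assignment, or null if P is unsatisfiable.
static boolean[] findSatisfyingAssignment(int n, Predicate<boolean[]> p) {
    for (long mask = 0; mask < (1L << n); mask++) {
        boolean[] values = new boolean[n];
        for (int i = 0; i < n; i++) {
            values[i] = ((mask >> i) & 1) == 1;
        }
        if (p.test(values)) {
            return values;
        }
    }
    return null;
}

// Example: P = (x0 || x1) && !x2; the call below finds, e.g.,
// x0 = true, x1 = false, x2 = false.
//   findSatisfyingAssignment(3, x -> (x[0] || x[1]) && !x[2]);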

Question: What do we gain, from the practical viewpoint, when we prove that a problem is NP-hard?

Answer: First, if a problem is NP-hard, then, unless P = NP (which most computer scientists believe to be false), no feasible algorithm is possible for solving all particular cases of this problem. Thus, we should not waste time looking for such a general algorithm.

Second, by definition, the fact that a problem is NP-hard means that every problem from the class NP can be reduced to it. Thus, if we have a good heuristic feasible algorithm for solving some particular cases of this problem, then reduction automatically generates algorithms for solving particular cases of all NP-problems. This reduction has been helpful in many cases.
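Here is a sketch of this reuse in Java (all names below are hypothetical): once we have a feasible reduction from an NP problem X to the NP-hard problem H, any heuristic solver for H also solves the corresponding instances of X.

// Hypothetical interfaces: a polynomial-time reduction from X to H,
// and a heuristic solver for the NP-hard problem H.
interface ReductionToH<X>     { String reduce(X xInstance); }
interface HeuristicSolverForH { boolean solve(String hInstance); }

class SolveXViaH<X> {
    private final ReductionToH<X> reduction;
    private final HeuristicSolverForH solver;

    SolveXViaH(ReductionToH<X> reduction, HeuristicSolverForH solver) {
        this.reduction = reduction;
        this.solver = solver;
    }

    // For decision problems, the answer to the reduced instance
    // is the answer to the original instance.
    boolean solve(X xInstance) {
        return solver.solve(reduction.reduce(xInstance));
    }
}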

Question: Why do we sometimes need to use probabilistic methods and heuristic methods that do not always work?

Answer: When a problem is NP-hard, then, unless P=NP, no feasible algorithm is possible that would always solve all the instances of this problem. In this case, we are looking for feasible algorithms that solve at least some instances; this is why we study heuristic and probabilistic methods.
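As a minimal sketch in Java (using the same representation of the condition P as in the satisfiability example above): instead of trying all 2^n assignments, a probabilistic method samples random ones; it may fail even when a satisfying assignment exists, but it is fast when such assignments are plentiful.

import java.util.Random;
import java.util.function.Predicate;

// Probabilistic heuristic for satisfiability: sample random assignments.
// Returns a satisfying assignment if one is found within the given number
// of attempts; returns null otherwise -- which is NOT a proof that P is
// unsatisfiable.
static boolean[] tryRandomAssignments(int n, Predicate<boolean[]> p, int attempts) {
    Random random = new Random();
    for (int t = 0; t < attempts; t++) {
        boolean[] values = new boolean[n];
        for (int i = 0; i < n; i++) {
            values[i] = random.nextBoolean();
        }
        if (p.test(values)) {
            return values;
        }
    }
    return null;
}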

Question: Why is it interesting to study the polynomial hierarchy?

Answer: While many practical problems belong to the class NP, there are many important practical problems which are not in NP.

For example, the problem of finding an engineering design that satisfies given specifications is in the class NP, since once a design is presented, we can feasibly check whether it indeed satisfies the specifications. So, the problem of checking the existence of the desired design y takes the form ∃y A(y) for a feasible predicate A(y), and thus, belongs to the class NP = Σ1P.

However, if we are looking for an optimal design -- e.g., the design whose cost is the smallest among all designs that satisfy the specifications -- then it is no longer clear how to check that a given design is indeed optimal. The only general way to do it is to check that the given design y is at least as good as every possible design z, i.e., that ∀z A(y, z) for a feasible predicate A(y, z). So, the problem of checking the existence of such an optimal design y takes the form ∃y ∀z A(y, z) and thus, belongs to the next class Σ2P.

For games, the situation is even more complicated: e.g., if we want to win in 3 steps, this means that we need to find a move x such that for every opponent's move y, we will be able to find a reply z that leads to a win. The existence of such a winning strategy is equivalent to ∃x ∀y ∃z A(x, y, z), and thus belongs to the class Σ3P.

For more than 3 steps, we get formulas with even more quantifier alternations: a win in k steps corresponds to a formula ∃x1 ∀x2 ∃x3 ... A(x1, ..., xk) with k alternating quantifiers, i.e., to a problem from the class ΣkP. Together, these classes Σ1P, Σ2P, Σ3P, ... form the polynomial hierarchy.

Question: Why do we need to study computing with oracles?

Answer: Traditionally, the computational complexity of an algorithm is determined by the number of its elementary steps. This works well if we have written this algorithm from scratch ourselves. However, in practice, we usually use others' methods as subroutines. Even if we program in Java, we do not program, e.g., sin(x) ourselves: we rely on built-in methods for computing special functions.

In general, many of these auxiliary methods are proprietary, and we can only use them as black boxes -- we input a value, we get a result. Since we do not know how many elementary operations are performed within such a method, we cannot estimate how many computational steps our algorithm takes. What we can do is count the number of our own elementary steps and the number of calls to these auxiliary methods.

Once we fix the list A of such auxiliary methods, we can thus consider, e.g., the class of all A-using algorithms for which the number of our own steps plus the number of calls to A is bounded by a polynomial -- this is what is meant by the class P^A, the class of algorithms which are polynomial-time with respect to the oracle A.
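Here is a minimal sketch in Java of this way of counting (the oracle interface and the counters are illustrative assumptions):

// The auxiliary method A is a black box: we can query it, but we cannot
// look inside, so each query counts as a single step.
interface Oracle { boolean query(String input); }

class AlgorithmWithOracle {
    private final Oracle a;
    long ownSteps = 0;     // elementary steps we perform ourselves
    long oracleCalls = 0;  // calls to the black-box method A

    AlgorithmWithOracle(Oracle a) { this.a = a; }

    // Example: check whether the oracle accepts some prefix of s.
    boolean acceptsSomePrefix(String s) {
        for (int i = 1; i <= s.length(); i++) {
            ownSteps++;     // our own elementary step
            oracleCalls++;  // one call to A, whatever happens inside it
            if (a.query(s.substring(0, i))) {
                return true;
            }
        }
        return false;
    }
}

// This algorithm is polynomial-time with respect to the oracle A:
// ownSteps + oracleCalls is bounded by a polynomial in the length of s.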