A short intro to computational complexity

Today, I’d like to talk about the computational complexity of solving Patterna puzzles. This first post gives a short (mostly informal) introduction to computational complexity (feel free to skip it if you are familiar with this field). In the next post I will then talk about HexCells and Patterna in the context of computational complexity and show you why the commonly heard claim that HexCells is $\mathbf{NP}$-complete is either wrong or at least very misleading.

What is computational complexity?

Let me digress for a moment and talk about computational complexity (CC). CC is a subfield of theoretical computer science (TCS), which studies theoretical aspects of computation in a mathematically rigorous way. Nowadays, there are more subfields of TCS than I could reasonably enumerate, but I would like to briefly talk about two classical ones: computability theory (or recursion theory, a name that emphasizes the subfield’s origin) and computational complexity (which itself has spawned a multitude of sub-subfields). Both of these fields have a common goal:

For a given function, how difficult is it to compute said function?

Functions and computing them

This of course poses several questions:

  • What is a function?
  • What does it mean to compute such a function?
  • How is difficulty measured?

A function $f$ is a mapping from a collection of inputs $X$ to a collection of outputs $Y$. It thus associates to each input from the set $X$ of inputs an output from the set $Y$ of valid outputs. We write $f \colon X \to Y$. Being able to compute a function means that there is a (finite) sequence of steps that tells you how to effectively[1] construct the output from the input (i.e., an algorithm). Here are some examples:

  • $X$ could be the collection of all finite lists of natural numbers, $Y$ is the same as $X$, and $f$ takes a list and sorts it. There are plenty of algorithms available for sorting lists[2], so $f$ is computable (a small sketch follows after this list).
  • $X$ could be the collection of all road networks with two marked points in the network, $Y$ the set of all sequences of directions, and $f$ the function that computes the directions for a shortest path between the two marked points in the road network. This is basically what the navigation system in your car does[3].
  • $X$ could be the set of all possible Patterna (or HexCells) levels, $Y$ the set consisting of the elements 0 and 1, and $f$ maps a level to 1 if it has a solution and to 0 if it does not have a solution. We will talk about what a solution to a level in Patterna (or HexCells) is below.
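
To make the first example concrete, here is a minimal sketch in Python (my own illustration, not part of the original post): a finite sequence of elementary steps that turns any input list into its sorted version, i.e., an algorithm computing the sorting function $f$.

```python
def sort_list(numbers):
    """Compute f: finite lists of naturals -> finite lists of naturals (sorted)."""
    result = []
    for x in numbers:
        # Find the position where x belongs in the already-sorted prefix.
        i = 0
        while i < len(result) and result[i] <= x:
            i += 1
        result.insert(i, x)
    return result

print(sort_list([3, 1, 2]))  # [1, 2, 3]
```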

As a first approximation to the difficulty of computing a function, we might call a function difficult to compute if there is no algorithm for computing it. We’ll say more about difficulty shortly.

There are some subtleties to this definition, and I have simplified some matters a bit, but this is the general idea[4].

Using decision problems instead of functions

For the purpose of the theory, it is sufficient to consider functions such as the one in the third example: namely functions that output either 0 or 1, yes or no, good or bad, etc. Such a function $f$ partitions its input set $X$ into two halves, namely the half $A$ that contains all inputs $x$ with $f(x) = 1$ and the half $B$ that contains all inputs $x$ with $f(x) = 0$. Such a partition of $X$ gives rise to a so-called decision problem: given an input $x$ from $X$, decide whether $x \in A$ or $x \in B$ – which is just saying that we want to compute $f$. Often we will only specify $X$ and $A$, since this completely determines $f$. In the following, a problem is then just a decision problem. Solving the problem means deciding whether $x \in A$ for any given input $x$. Here are some examples:

  • Let $X$ be the set of all finite 0-1 sequences. The problem Parity is the set of all finite sequences that have an even number of 1’s in them (a small code sketch of Parity and Sorted follows after this list).
  • Let $X$ be the set of all finite sequences of natural numbers. The problem Sorted is the set of all sorted finite sequences of natural numbers.
  • Let $X$ be the set of all Patterna (HexCells) levels. The problem Patterna (HexCells) is the set of all solvable Patterna (HexCells) levels.
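
As a small illustration (again my own Python sketch), the first two decision problems are just 0/1-valued functions; deciding membership in the set $A$ is exactly computing $f$:

```python
def parity(bits):
    """Decide Parity: output 1 if the 0-1 sequence contains an even number of 1's."""
    return 1 if bits.count(1) % 2 == 0 else 0

def is_sorted(numbers):
    """Decide Sorted: output 1 if the sequence of natural numbers is sorted."""
    return 1 if all(a <= b for a, b in zip(numbers, numbers[1:])) else 0

print(parity([1, 0, 1, 1]))     # 0 -- three 1's, so the count is odd
print(is_sorted([1, 2, 2, 5]))  # 1 -- the list is sorted
```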

Using a few clever arguments, one can show that the computation of any function can be replaced with repeatedly solving a decision problem[5].

What is a difficult problem?

From what I said in the beginning, we can now conclude that a problem is difficult if there is no algorithm that solves it. This perspective is a bit harsh: There are certain problems for which there are algorithms, but these algorithms are so slow that running them on larger inputs is not feasible.

Can we make that more formal? Yes, yes, we surely can. We already said that an algorithm consists of certain elementary steps. Given some input for the algorithm, we can count the number of steps that are needed to compute the output. For example, determining whether a list of $n$ numbers is sorted will take roughly $n$ steps: Start at the beginning of the list and compare each number to its successor. If it is not larger, continue, else stop. This means that if we run the algorithm on an input of size 100, it will take roughly twice as long as on an input of size 50[6]. Note that this does not say anything at all about how long an actual implementation of the algorithm would need to run on such inputs. It merely describes how the running time is expected to grow with the size of the input.
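
Here is that scan as a small Python sketch (mine, not the post’s), with an explicit step counter so you can see the roughly linear growth:

```python
def sorted_check_with_steps(numbers):
    """Scan the list once, counting each comparison as an elementary step."""
    steps = 0
    for a, b in zip(numbers, numbers[1:]):
        steps += 1
        if a > b:          # found a descent: the list is not sorted
            return 0, steps
    return 1, steps        # no descent found: the list is sorted

# On a sorted list of length n the scan needs n - 1 comparisons:
print(sorted_check_with_steps(list(range(50))))   # (1, 49)
print(sorted_check_with_steps(list(range(100))))  # (1, 99) -- roughly twice as many
```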

Given an algorithm, we can thus speak of its runtime. The time complexity of a problem is then the minimum runtime of an algorithm that solves this problem[7]. Therefore, each algorithm for a problem gives an upper bound on its time complexity. For example, the time complexity of Sorted is at most linear. Similarly, to decide Parity we only need to traverse the input once, which means that the time complexity of Parity is at most linear[8]. Using time complexity, we can find a better definition of difficult that is closer to the actual reality of things:

A problem is efficiently solvable[9] if its time complexity is polynomial. All other problems are referred to as difficult.

This means that a problem is efficiently solvable if there is an algorithm for it whose runtime is bounded by $n^c$ for some number $c$, where $n$ is the size of the input[10]. The existence of such an algorithm usually means that there is some insight into the problem that can be used (and that we are not simply testing all possible solutions). This is in stark contrast to problems where the best known algorithms have exponential runtime[11], that is, their runtime grows as $c^n$ for some number $c > 1$. For example, if the input is a list and the algorithm has runtime in the order of $2^n$, then a list with 10 elements will take roughly a thousand ($2^{10} \approx 10^3$) times as long as a list with a single element. A list with 20 elements will take about a million ($2^{20} \approx 10^6$) times longer to process than a single-element list, and a list with 50 entries will take more than $10^{14}$ times longer than a singleton list. It should be clear that this means that solving the problem for larger inputs gets infeasible very quickly.
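
To see this growth concretely, here is a tiny Python sketch (my own) comparing a polynomial step count $n^2$ with an exponential one $2^n$:

```python
# Compare a polynomial step count (n^2) with an exponential one (2^n).
for n in (10, 20, 50):
    print(f"n = {n:2d}: n^2 = {n**2:5d}   2^n = {2**n}")

# n = 10: n^2 =   100   2^n = 1024
# n = 20: n^2 =   400   2^n = 1048576
# n = 50: n^2 =  2500   2^n = 1125899906842624
```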

The classes $\mathbf{P}$ and $\mathbf{NP}$

The class of all efficiently solvable problems is called $\mathbf{P}$, for polynomial time. There is another class that turns out to be very useful for discussing the time complexity of many problems that come up in real life, namely the class $\mathbf{NP}$ (non-deterministic polynomial time). A problem $A$ is in $\mathbf{NP}$ if a positive solution to it can be efficiently verified, where efficiently means in polynomial time[12]. That is, if an input $x$ is a yes-instance (i.e., $x \in A$), then there is a short (polynomial in the size of $x$) proof for it, and we can check the correctness of that proof in polynomial time. Here are some examples:

  • The problem Clique is defined as follows: Given a natural number $k$ and a social network represented by a collection of people and their friendships (where we assume that friendships are mutual, i.e. if you are my friend, I am also your friend), does there exist a clique of size $k$? That is, are there $k$ people who are all friends with each other? This is a problem for which no efficient algorithm is currently known, but if there is such a clique and we know who is part of it, we can efficiently check that it really is a clique (see the sketch after this list).
  • The problem TravelingSalesPerson is defined as follows: Given a number $\ell$, a road network, and a list of cities, is there a path through the network of total length at most $\ell$ that visits all the cities from the list? Again, in general we do not know of an efficient way to compute such a path[13] – but if someone were to give us a path, we can easily check whether it visits all the cities from the list and whether its total length is at most $\ell$.
  • Any problem in $\mathbf{P}$ is also in $\mathbf{NP}$: The certificate is then simply a trace of the run of our efficient algorithm for the problem. (Do not worry if this is not immediately clear. Just remember: $\mathbf{P}$ is a part of $\mathbf{NP}$.)
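
As a sketch of what “efficiently verifiable” means, here is a Python verifier for Clique certificates (my own code, with a hypothetical representation of friendships as a set of unordered pairs); checking all pairs inside the claimed clique takes only polynomially many steps:

```python
def is_clique(friendships, group):
    """Verify a Clique certificate: is every pair in `group` a friendship?

    `friendships` is a set of frozensets {a, b}; checking all pairs of the
    claimed clique takes at most |group|^2 steps, i.e. polynomial time.
    """
    people = list(group)
    for i in range(len(people)):
        for j in range(i + 1, len(people)):
            if frozenset((people[i], people[j])) not in friendships:
                return False
    return True

friends = {frozenset(p) for p in [("ann", "bob"), ("bob", "cay"), ("ann", "cay")]}
print(is_clique(friends, {"ann", "bob", "cay"}))  # True: a clique of size 3
```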

One of the most famous problems in mathematics and computer science asks

Is $\mathbf{P} = \mathbf{NP}$?

This can be stated as:

Is finding a solution harder than verifying it?

A lot of people simply say duh, of course it is. But we are lacking a mathematical proof. For a proof of $\mathbf{P} = \mathbf{NP}$, we would need to find fast algorithms for some very difficult problems (thus showing that all problems in $\mathbf{NP}$ are also in $\mathbf{P}$). While there have certainly been advances in the time since the definition of the classes $\mathbf{P}$ and $\mathbf{NP}$, we still have no idea whether such efficient algorithms are possible for $\mathbf{NP}$-problems. Conversely, if we wanted to prove $\mathbf{P} \neq \mathbf{NP}$, we would need to find a problem in $\mathbf{NP}$ that is not efficiently solvable – that is, a problem for which no efficient algorithm exists at all. An upper bound on the time complexity of a problem can be established by exhibiting a single efficient algorithm for it. But here we are looking for a lower bound, saying that any algorithm whatsoever will take a certain amount of time, so instead of a single algorithm, we now have to consider all algorithms at once. Which is a tall order.

Comparing problems

Since it seems to be very hard to come up with absolute lower bounds for the time complexity of problems, it is reasonable to look for relative lower bounds: We would like to at least be able to say how the difficulties of problems relate to each other. Are there any hardest problems? In the context of $\mathbf{P}$ and $\mathbf{NP}$, the right tool for this is the so-called polynomial time reduction[14]. The idea here is that if we can efficiently transform a problem $A$ into a problem $B$, and we can solve $B$ efficiently, then we can solve $A$ efficiently. We say that $A$ efficiently reduces to $B$. To restate this: If $A$ reduces efficiently to $B$, then any efficient solution to $B$ translates into an efficient solution for $A$: To solve $A$, simply translate inputs for $A$ into inputs for $B$ (which is efficient by assumption) and solve $B$ (which again is efficient by assumption)[15].
Here is an example:

  • The problem IndependentSet reduces efficiently to Clique: The input for the problem is a social network and some number $k$. We are searching for at least $k$ people such that none of them are friends of each other. Assuming we have an efficient algorithm for Clique, we can do the following to build an efficient algorithm for IndependentSet: Take the social network that was given as input and modify it by removing all existing friendships and inserting friendships between all people who were not friends of each other before. Now run the efficient algorithm for Clique. Clearly, if there existed $k$ people in the original network such that none of them were friends of each other, these $k$ people will form a clique in the modified network (and vice versa). Furthermore, this modification of the network can be done efficiently by looking at every pair of people (and the size of the network is at least the number of people, since it needs to list the people somehow). A small sketch of this transformation follows below.
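
Here is a minimal Python sketch of that transformation (my own illustration; the graph representation and names are hypothetical): complement the friendship relation and hand the result to any Clique solver.

```python
from itertools import combinations

def independent_set_to_clique(people, friendships):
    """Reduce IndependentSet to Clique by complementing the friendships.

    `friendships` is a set of frozensets {a, b}. In the returned network two
    people are friends exactly if they were NOT friends before, so independent
    sets of the original network are precisely the cliques of the new one.
    """
    return {frozenset(pair) for pair in combinations(people, 2)
            if frozenset(pair) not in friendships}

people = ["ann", "bob", "cay"]
friendships = {frozenset(("ann", "bob")), frozenset(("bob", "cay"))}
# {ann, cay} is an independent set originally and a clique after the transformation:
print(independent_set_to_clique(people, friendships))  # e.g. {frozenset({'ann', 'cay'})}
```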

Amazingly, there are problems to which all problems from $\mathbf{NP}$ can be reduced. That is, there are problems for which an efficient solution would immediately yield an efficient solution for every problem in $\mathbf{NP}$. These problems are called $\mathbf{NP}$-hard. If an $\mathbf{NP}$-hard problem is also in $\mathbf{NP}$, it is $\mathbf{NP}$-complete.
Here are some problems from the surprisingly long list of $\mathbf{NP}$-complete problems:

  • Clique is $\mathbf{NP}$-complete
  • IndependentSet is $\mathbf{NP}$-complete
  • TravelingSalesPerson is $\mathbf{NP}$-complete

Proving a problem $\mathbf{NP}$-complete is generally seen as good evidence that there most likely is no efficient algorithm solving the problem.

Recap

Here is the TL;DR version of the above:

  • $\mathbf{P}$ is the class of efficiently solvable problems
  • $\mathbf{NP}$ is the class of problems for which positive instances can be efficiently verified
  • If a problem $A$ efficiently reduces to a problem $B$ and $B$ is efficiently solvable, then $A$ is efficiently solvable.
  • A problem is $\mathbf{NP}$-hard if all problems from $\mathbf{NP}$ efficiently reduce to it
  • A problem is $\mathbf{NP}$-complete if it is $\mathbf{NP}$-hard and in $\mathbf{NP}$
  • $\mathbf{NP}$-complete problems are the most difficult[16] problems in $\mathbf{NP}$
  • Finding an efficient algorithm for an $\mathbf{NP}$-complete problem is improbable, since that would prove $\mathbf{P} = \mathbf{NP}$ and earn you a million dollars.

  1. What effective means is of course up for debate (well, most people do accept the Church-Turing thesis), but for simplicity let’s say that effective simply means that it can be implemented on a computer such as the one you are using to read this.
  2. Here is a popular one.
  3. Except that the system in your car does not have the road network as an input but built into it. It simply takes two locations and computes the shortest path in the fixed map it comes with. This is of course a well-studied problem.
  4. Here is a first thing to think about: How is the input from $X$ given to the algorithm? For example, how does the algorithm access the information in the map (2nd example)? In other words, we have to deal with encoding. For all intents and purposes, we can assume that inputs are always finite sequences consisting of 0 and 1. I hopefully do not have to convince you that any kind of finite information can be encoded in this fashion (the computer you are using right now does exactly that). This of course excludes cases where the elements of $X$ are themselves infinite objects, since we would then have to manipulate infinite objects. There are ways and ideas around that, e.g. working with approximations to infinite objects, but this is getting too far from the actual point I am trying to make.
  5. The main idea is simple: Assuming $f$ maps 0-1 sequences to 0-1 sequences (i.e. sequences of bits), we can replace the computation of $f$ with computing the single bits of the output sequentially.
  6. This of course only holds for the worst case of a sorted list. We will always consider the worst-case, which may not be realistic, but simplifies things a lot (and still leaves us with a theory that has more open problems than questions answered).
  7. People often talk about the time complexity of an algorithm. I think that this terminology is grossly wrong, since there is nothing complex about the number of steps an algorithm takes. Computational complexity theory is not about algorithms (because that’s what algorithmics is all about!) but about problems. It makes sense to call a problem complex when there is no fast algorithm for it, but it does not make sense to call an algorithm complex simply because it has bad worst-case run time behavior. After all, the algorithm could simply waste time doing nothing.
  8. Such upper bounds of course depend on the specific mode of computation. We are restricting ourselves to sequential computation. In a parallel world, Parity can be solved more quickly.
  9. This is a term that a lot of people are offended by, since a polynomial runtime with a huge exponent is hardly efficient. They are not wrong, but we will ignore this issue here.
  10. The size of the input is of course a measure to be defined carefully (and not without its quirks). We will usually mean the length of the input encoded in bytes in some sensible form, which for lists would be linear in the length of the list. This becomes more tricky when numbers are part of the input, since people always tend to first think of the runtime in terms of the number in the input, not its encoding (which is usually logarithmic in the number, unless you are going for unary codings – which is one of the quirks mentioned above). Pseudopolynomial algorithms are what you end up with.
  11. There are of course plenty of functions that are super-polynomial but sub-exponential. We will ignore them here.
  12. The original definition is in terms of non-deterministic Turing machines, but this is taking us a bit too far off and also (in my opinion) not that helpful. Our definition can be made more formal. In a very precise way, $\mathbf{NP}$ is simply $\mathbf{P}$ with an existential quantifier ranging over a polynomial number of bits.
  13. The classical definition is not taking a road network (i.e. a map) as its input but simply a list of distances between the cities. For an actual road network, we can do a bit better than for the general case.
  14. Formal version.
  15. This of course works since polynomials applied to polynomials yield polynomials.
  16. More fine-grained forms of reductions can of course make this more precise.
