# A Short Note on the Halting Problem

April 22, 2008

Over at Shtetl-Optimized, Scott Aaronson recently had an open thread where he invited readers to pose questions for him (this apparently is what Scott considers relaxation after a hard week). There was an interesting exchange in the comments that caught my eye, but since Scott has already closed the comments on that post, I thought I’d write about it here.

A commenter named Abel posed the following question:

> Are there any interesting results concerning subsets of Turing Machines for which the Halting Problem can be solved?
>
> For example, it is trivial to see that machines for which you cannot go back to a previously visited state do not halt. But it would be interesting if there were any non-trivial result for a subset in which there are both machines that halt and machines that don't halt.

A little further down in the comments, Scott replies that a reader has emailed a response:

> Matthew P. Johnson writes in to answer Abel's question, on whether there exists a "nontrivial" but structurally-defined subset of Turing machines for which the halting problem is decidable:
>
> Supposedly, the subset is extremely close to trivial, i.e., almost all of them (!):
>
> "The halting problem for Turing machines is decidable on a set of asymptotic probability one. Specifically, there is a set B of Turing machine programs such that (i) B has asymptotic probability one, so that as the number of states n increases, the proportion of all n-state programs that are in B goes to one; (ii) B is polynomial time decidable; and (iii) the halting problem H intersect B is polynomial time decidable. The proof is sensitive to the particular computational model."
>
> "The halting problem is decidable on a set of asymptotic probability one"
> http://arxiv.org/abs/math/0504351

Since that’s a seemingly remarkable result, I’d like to look at it a little closer, and then offer a different response to Abel’s original question.

*The halting problem is decidable on a set of asymptotic probability one* was written by Joel David Hamkins and Alexei Miasnikov, both of the City University of New York. The paper appears to be based on a talk by Hamkins at the Fall 2004 CUNY Logic Workshop.

Since they mention that their proof is sensitive to the particular computational model they use (actually, it would be more precise to say result instead of proof), let's start by taking a look at that. They use a fairly standard, if bare-bones, Turing Machine model. Their TM uses an alphabet consisting of {0,1} and a single tape which extends infinitely to the right. If Q is the set of (non-halting) states of the TM, then the transition function is given by:

δ: Q×{0,1} → (Q ∪ {halt})×{0,1}×{L,R}.

This allows the easy calculation that for a given number of states n, there are (4(n+1))^(2n) distinct TMs: each of the 2n possible (state, symbol) pairs on the left-hand side can be mapped to any of (n+1)·2·2 = 4(n+1) possible (state, symbol, direction) triples on the right.
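To make the count concrete, here is a quick sanity check in Python (the function name is mine, not the paper's):

```python
# Count the distinct n-state machines under this model (a sketch of the
# calculation above, not code from the paper). Each of the 2n
# (state, symbol) pairs chooses among:
#   a next state from Q ∪ {halt}  -> n + 1 options
#   a symbol to write             -> 2 options
#   a head move, L or R           -> 2 options
def num_machines(n: int) -> int:
    return (4 * (n + 1)) ** (2 * n)

print(num_machines(1))  # 64: one state gives 2 table entries, 8 choices each
```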

Now we come to the one little (but critical) feature of their model: if the head is located on the left-most cell, and the TM attempts to move the head to the left, then the computation fails in a non-halting condition. (We can suppose that this TM is implemented in Windows NT, and instead of failing gracefully, we get a “blue screen of death” when the TM attempts an illegal move.) We’re ready to look at their main result.

Define B to be the set of TMs that, on an input tape initialized to all zeros, either halt or fall off the left edge of the tape before repeating a state.

Now, I imagine you can already see where this is going. We can obviously tell if an arbitrary TM is an element of B and if so whether it halts or not, by simply simulating it for n steps. (I wouldn’t call that a structural property of the TM, but in fairness the authors do not describe it that way, either.) The only interesting question is whether B has asymptotic probability one. For this, the authors invoke a result on random walks due to Polya:

> In the random walk with equal likelihood of moving left or right on a one-way infinite tape, beginning on the left-most cell, the probability of eventually falling off the left edge is 1.

This can be seen intuitively by imagining the behavior of the TM's tape head. At each step it can move either left or right. Assuming the TM doesn't halt first, as the number of steps grows it becomes increasingly likely that the head will return to the starting point; and each time it does, there is a 50% chance it will fall off the left edge on the next step. There is some work involved in making this into a rigorous argument, but basically that's the entire paper right there. (By the way, for some more interesting background on random walks, see Brian Hayes' recent post at bit-player.)
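If you like to see such things empirically, here is a small Monte Carlo sketch of the quoted random-walk fact; the function name, step budget, and trial count are all arbitrary choices of mine:

```python
import random

# Monte Carlo illustration (a sanity check, not a proof): a walker starts on
# the left-most cell and moves left or right with equal probability; we
# estimate how often it falls off the left edge within a fixed step budget.
def falls_off(max_steps, rng):
    pos = 0
    for _ in range(max_steps):
        pos += rng.choice((-1, 1))
        if pos < 0:
            return True
    return False

rng = random.Random(0)
trials = 5_000
hits = sum(falls_off(10_000, rng) for _ in range(trials))
print(hits / trials)  # close to 1, and it approaches 1 as the budget grows
```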

I wouldn’t blame you if you find that result less than satisfying.

I think I can offer a somewhat more satisfying answer for Abel. The canonical example of a computing model for which the halting problem is decidable is the linear-bounded automaton, or LBA. In an LBA, as well as in other space-bounded models, it is possible to tell if a TM is going to loop by simply letting it run long enough to exhaust all possible configurations. It is possible to enforce this syntactically for arbitrary TMs by adopting some conventions for the presence of an outer "control module" that enforces tape boundaries, etc. The same approach also works for syntactically checking membership in time-bounded classes like P and NP, although of course the halting question is moot for those classes.
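Here is a back-of-the-envelope version of the configuration-counting argument, with illustrative variable names of my own:

```python
# An LBA with q states and a g-symbol tape alphabet, running on an input of
# length m, has at most q * m * g**m distinct configurations
# (state, head position, tape contents). If it runs longer than that without
# halting, some configuration has repeated, so it will loop forever.
def config_bound(q, m, g):
    return q * m * g ** m

# e.g. a 2-state binary LBA on a length-3 input:
print(config_bound(2, 3, 2))  # 48 steps suffice to detect looping
```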

This agrees with our intuition about real-life computing. Physical computers are, as a practical matter, space-bounded machines, and they may as well be treated as time-bounded also. For most algorithms we have a pretty good idea of how long they should run, and if a computer program is taking too long to finish we don’t hesitate to pull the plug on it and start examining our code for where we screwed up.

posted in computer science by Kurt

### 5 Comments to "A Short Note on the Halting Problem"

1. arkar wrote:

Thank you for your response. If I am not wrong (I could easily be), the second set is actually a subset of the first one (although the result for the second one does not use densities). I wonder now about the existence of a maximum set…

2. Abel wrote:

Well, the last comment should be signed by “Abel”

3. Kurt wrote:

Hi Abel, I was wondering if you would find this post, since there was no way to ping back to Scott's post. I'm not sure I understand your comment about the second set being a subset of the first, though.

4. abel wrote:

What I wanted to say is that if I am not wrong, for each LBA, we can find an equivalent Turing Machine of the same kind as the ones used in the paper. Of course, the way I phrased it before was pretty nonsensical.

5. Kurt wrote:

Not really. The matter of how the TM is implemented is separate from whether it is an LBA or not. In their paper, *B* is the set of TMs that 'crash' almost immediately upon starting. They deliberately choose a TM model in which nearly *all* machines crash almost immediately upon starting, so as a result *B* contains almost all TMs in that model. But it's a totally superficial result. You could, for whatever perverse reason, implement LBAs in something like their TM model, and indeed if you did that, most of them would crash almost immediately upon starting. But those would not be very interesting programs; they wouldn't even be able to read all of their input. The interesting LBAs, that actually do some nontrivial computing, all lie outside of *B*.