Church's Thesis and Theoretical Computer Science

At any rate, many practical solvers of differential equations use numerical rather than symbolic methods for computation; any term is likely to have some error associated with it. For this reason, exact representation of coefficients, including well-defined constants like pi, is usually not required; a suitable approximation will do. In engineering applications, measured quantities are assumed to have an error from the start; it's folly to pretend otherwise and represent an imprecise measurement with an exact figure.

One of the longstanding follies of the PLT community (some members, anyway) is the belief that we can do a better job of number-crunching, usually using symbolic processing techniques, than the mathematicians who are experts in this sort of thing and who generally reject symbolic approaches in favor of numeric ones. I always chuckle when I read somebody from the PL community ridiculing the use of things like floating point on the grounds that their favorite bignum implementation is more precise.

Just because computer scientists use math and are frequently domain experts in the various discrete subfields of math (including the theory of computation) doesn't make us domain experts in all of mathematics. The experience that Java developers had with the numerical community ought to be an instructive lesson to all. One cannot say that "the transcendentals are uncountable" about a number: being countable or uncountable is an attribute of a set, not of a number.

While the set of real numbers is uncountable, pi belongs to the set of computable numbers, a subset of the reals which is of course countable. Computable numbers get discrete approximations, but to any desired precision. This could even be programmed lazily: evaluate more digits whenever the need arises.
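As an illustration (mine, not from the thread), the lazy-digits idea can be sketched in Python using Gibbons' unbounded spigot algorithm, which yields decimal digits of pi one at a time, on demand:

```python
def pi_digits():
    """Gibbons' unbounded spigot algorithm: lazily yield decimal digits of pi.

    Each digit is produced only when the consumer asks for it, which is
    exactly the "evaluate more digits whenever the need arises" idea.
    """
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            yield n  # next digit is certain; emit it and rescale
            q, r, t, k, n, l = (10 * q, 10 * (r - n * t), t, k,
                                (10 * (3 * q + r)) // t - 10 * n, l)
        else:
            # not enough information yet; consume another term of the series
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

digits = pi_digits()
first_ten = [next(digits) for _ in range(10)]  # 3, 1, 4, 1, 5, 9, 2, 6, 5, 3
```

Python's arbitrary-precision integers make this exact at every step; the only cost of more precision is more iterations, which is the "expensive on a real computer" caveat from the next comment.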

Granted, that's expensive on a real computer, but who's counting when the tape is infinite? Pi, being a computable number (transcendental, not algebraic), is of course represented as a lazy promise :) Or not. Around the magnitude of Planck's constant, things become discrete: not only charge, energy, and momentum, but probably even space and time. If so, there's not even a need for differential equations. In the limiting case, yes, I agree with you absolutely: it's nonsense to suppose that distributed systems magically have more computational power than individual Turing machines.

N Turing machines communicating are no more powerful than one Turing machine. I would go a lot further, and say that I find even the claim that "IMs" (to quote Wegner: "interactive machines") are somehow outside of the SCCT pretty absurd, not merely "technically true but irrelevant", as I think I could paraphrase you? You can view the process of interest as simply a "function extended in time", and simulate that function.

The first caveat is that once you have parallel composition of processes, you do often need "reactive" models of your processes. Not because the set of computable functions changes.

But because parallel composition allows something else to happen in addition to divergence: deadlock. This is one way of looking at why we're not really changing the SCCT. If you don't have parallel composition, a process which consumes two apples and then produces two oranges seems the same as a process which eats an apple, produces an orange, rinses and repeats. But the two will deadlock in different parallel-composed contexts. I'll explain the second half of that first: quite a lot of programs carry around with them (if only in their authors' heads) a rough proof of termination.
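The apples-and-oranges point can be made concrete with a toy synchronous-composition model (my sketch, not from the thread): processes are finite traces of actions, `apple?` receives and `apple!` sends, and a step happens only when both sides offer complementary actions on the same channel.

```python
def compose(proc, ctx):
    """Run proc against ctx in lockstep.

    Returns True if both traces complete, False if they deadlock
    (neither side can take a step).
    """
    i = j = 0
    while i < len(proc) and j < len(ctx):
        a, b = proc[i], ctx[j]
        if a[:-1] == b[:-1] and {a[-1], b[-1]} == {'!', '?'}:
            i += 1  # complementary offers on the same channel: synchronize
            j += 1
        else:
            return False  # deadlock
    return i == len(proc) and j == len(ctx)

batch      = ['apple?', 'apple?', 'orange!', 'orange!']  # two apples in, then two oranges out
one_by_one = ['apple?', 'orange!', 'apple?', 'orange!']  # apple in, orange out, repeat
ctx        = ['apple!', 'orange?', 'apple!', 'orange?']  # feeds an apple, then waits for an orange
```

Against `ctx`, `one_by_one` runs to completion, while `batch` deadlocks after the first apple: after one synchronization it insists on a second apple while the context insists on an orange. Viewed purely as input/output functions the two processes are identical, which is exactly why reactive models need more than functions.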

Programmers are in the habit of using Turing's limiting results intuitively to know when they are getting into hot water: if you write something which is suddenly interpreting something that looks like a full-blooded language of directives, you know your termination guarantees are at risk. The problem comes with the following (admittedly erroneous) piece of intuition: "I'm composing n boring processes which all, e.g., behave like fancy PDAs, so I feel safe."

Which brings me to the second point: parallel composition is Alice in Wonderland for some models of computation, notably PDAs. This is especially bad when people have written in DSLs which are meant to be non-Turing-powerful, and then composed the results; it is made worse by the fact that distributed computation makes it harder to "see" the algorithm which is getting run. This is, for me, where it all starts to be very LtU-relevant again: languages are notations, and implicit models of computation.
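Why composition of PDAs is dangerous is a classic result: one stack gives you a PDA, but two stacks simulate a Turing-machine tape (everything left of the head on one stack, the head and everything to its right on the other). A minimal sketch of that simulation, written by me for illustration:

```python
class TwoStackTape:
    """A Turing-machine tape simulated by two stacks.

    `left` holds the cells to the left of the head (top = nearest),
    `right` holds the head cell and everything to its right (top = head).
    Each stack alone is only PDA-power; together they give full tape access.
    """

    def __init__(self, blank='_'):
        self.left, self.right, self.blank = [], [], blank

    def read(self):
        return self.right[-1] if self.right else self.blank

    def write(self, sym):
        if self.right:
            self.right[-1] = sym
        else:
            self.right.append(sym)

    def move_right(self):
        # head cell migrates onto the left stack
        self.left.append(self.right.pop() if self.right else self.blank)

    def move_left(self):
        # nearest left cell becomes the new head cell
        self.right.append(self.left.pop() if self.left else self.blank)

tape = TwoStackTape()
tape.write('a'); tape.move_right(); tape.write('b'); tape.move_left()
```

So two "boring" stack-based processes, composed so that each can push what the other pops, are already enough to leave the decidable world.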

So what notations make it easy to reason about what happens when things compose in parallel? The third caveat is quite boring: some processes are intended to run forever, so you need some way of breaking them down. Most of the time, you can break them down into a "functional-reactive" style model. But even this is actually a primitive way of treating so-called interactive processes. What would I conclude out of all this? Just that Wegner is right that we need to pay attention to interactivity (or, as I probably prefer, following I think Nielsen et al., "reactivity"), but he's wrong about the reasons.

It's not "breaking out" of the Church-Turing model of computation, it's refining it. Someday, we'll all be quite relaxed about Turing, and realise that a new model of computation doesn't have to "break the Turing barrier" to be interesting.

In particular, Smullyan has a rather cute statement about why the second incompleteness theorem, whilst wonderful, isn't as bad as it seems: in essence, a formal system vouching for its own consistency would be worth about as much as a salesman vouching for his own honesty. Sure, some input tapes will be invalid following certain prefixes of output tapes. If the machine is non-deterministic, just allow an "error, rest of tape is garbage" symbol on the output tape.

Let's try to keep the discussion at least semi-related to programming languages, if we can. It's OK to give references to resources about AI etc. Andris is our in-house guru on this matter; I wonder what he has to say. One wonders how we got along without these for so long.


I think the point of the article is that Turing machines are frequently presented in a way that makes you believe they can do everything a real machine can do, and this is not true. A real machine can cope with an infinite number of inputs and outputs; to model this you would need an infinite number of Turing machines, or a model like the Persistent Turing Machine cited in the article. At first I was highly skeptical of the claims of the paper and tended to disbelieve them.

But after thinking about it for a while, I tend to agree. I offer functional programming as evidence that Turing machines don't capture all of computation: nearly every functional programming language sacrificed purity in the name of doing input and output. The few languages that didn't had highly awkward or non-existent IO.

It wasn't until the invention of monads that IO became reasonably natural in a purely functional framework, a problem solved only within the last 15 years. The pure lambda calculus is exactly equivalent to Turing machines. However, since it was so difficult to integrate the pure lambda calculus with IO, it seems to me that there really is something to the authors' claims. Even then, monads don't make for a "purely functional execution", if such a thing can even be envisioned; that is, conceptually, monads still rely on run-time side effects for their execution.
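To make the monadic-IO point concrete, here is a toy world-passing encoding (my sketch, roughly how Haskell's `IO` can be modeled; the names `pure`, `bind`, `get_line`, `put_line` are mine): an IO action is a pure function from a world to a result and a new world, and `bind` sequences actions by threading the world through.

```python
def pure(x):
    """Lift a value into an action that doesn't touch the world."""
    return lambda world: (x, world)

def bind(action, f):
    """Sequence: run `action`, feed its result to `f`, run the next action."""
    def run(world):
        result, world2 = action(world)
        return f(result)(world2)
    return run

# The "world" here is just (pending input lines, output lines so far).
def get_line(world):
    inputs, outputs = world
    return inputs[0], (inputs[1:], outputs)

def put_line(s):
    def run(world):
        inputs, outputs = world
        return None, (inputs, outputs + [s])
    return run

greet = bind(get_line, lambda name: put_line("hello " + name))
_, (_, outputs) = greet((["world"], []))  # outputs == ["hello world"]
```

Every function here is pure, yet actually *running* the program still means handing it the real world and performing the effects, which is the sense in which monads structure side effects rather than eliminate them.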

This paper seeks to explode the myth that Turing machines (TMs) are the universal model for all computation. In particular, the myth can be found in today's popular undergraduate theory textbooks as the Strong Church-Turing Thesis: a TM can compute anything that a computer can do. To understand the reasons for this better, we can identify three distinct claims that make up the myth, and examine them individually. Claim 1: All computable problems are function-based.