Computerized Math, Formal Proofs and Alternative Logic
March 16, 2013 3:33 PM Subscribe
Using computer systems for doing mathematical proofs - "With the proliferation of computer-assisted proofs that are all but impossible to check by hand, Hales thinks computers must become the judge."
Three years ago, Vladimir Voevodsky, one of the organizers of a new program on the foundations of mathematics at the Institute for Advanced Study in Princeton, N.J., discovered that a formal logic system that was developed by computer scientists, called "type theory," could be used to re-create the entire mathematical universe from scratch. Type theory is consistent with the mathematical axioms, but couched in the language of computers. Voevodsky believes this alternative way to formalize mathematics, which he has renamed the univalent foundations of mathematics, will streamline the process of formal theorem proving. Voevodsky and his team are adapting a program named Coq, which was designed to formally verify computer algorithms, for use in abstract mathematics.

also btw, speaking of mathematical revolutions, from a historical perspective, check out The Man of Numbers: Fibonacci's Arithmetic Revolution - "Before the 13th century Europeans used Roman numerals to do arithmetic. Leonardo of Pisa, better known today as Fibonacci, is largely responsible for the adoption of the Hindu–Arabic numeral system in Europe, which revolutionized not only mathematics but commerce and trade as well. How did the system spread from the Arab world to Europe, and what would our lives be without it?"
I talked to Wolchover a lot for this piece and I thought she did a good job combining a lot of material about a lot of different strands of mathematics into one space. It might be worth saying that at the moment hardly anyone is following what Voevodsky's group is doing (or at least I don't know anyone who's following it.) Of course that can change very quickly when an approach starts bearing fruit!
posted by escabeche at 3:52 PM on March 16, 2013 [3 favorites]
Interesting stuff that I just barely can't understand. Thanks, kliuless, for the interesting articles and for keeping me humble.
posted by benito.strauss at 4:31 PM on March 16, 2013
That's a nice article. For a long time I've wished that mathematics textbooks and papers came with electronic versions giving fully formal proofs, but in a format where only the sketchiest top level is immediately visible. Then, if there's a step you don't understand, you just pop it open and get more detail ... and so on, until the lowest level consists of steps that are so trivial that it's impossible not to understand them. Leslie Lamport (of LaTeX) proposed a proof style like that in this paper (PDF), but I guess it hasn't caught on. But if a project to develop a really useable proof-assistant gains some momentum then maybe someone will develop a system like this as a readability layer on top of that?
(Disclaimer: I'm in the middle of writing a grant proposal to work on stuff related to Voevodsky's Univalent Foundations work.)
posted by logopetria at 4:46 PM on March 16, 2013 [11 favorites]
a formal logic system that was developed by computer scientists, called "type theory" could be used to re-create the entire mathematical universe from scratch.

Okay, this just sounds kind of strange to me. While Type Theory can be used in computer science, like Weston said it was originally created before computers even existed, specifically to help build a foundation for mathematics - that project was kind of killed by Gödel's incompleteness theorems, which showed that any consistent formal system powerful enough to express arithmetic contains statements it can neither prove nor disprove.
What Russell and others wanted to do was build a system that could tell if any mathematical statement was true or false - but we know now we can't do that, both from Gödel's incompleteness theorem and from the halting problem: we can't ask "Will this program halt?" and always get an answer.
But the mathematical theories, like Type Theory, that Bertrand Russell and others came up with paved the way for Church, Turing, etc., and basically created computer science.
So really Computer Science branched off from this kind of math. And the kind of ultra-theoretical computer science stuff that involves Type Theory is basically just math.
In fact, the Curry–Howard correspondence says that computer programs and mathematical proofs are actually the same thing.
So to say that these are math people "taking ideas" from computer science and using them to form a basis for mathematics doesn't really sound right to me - instead it seems like this is a problem or set of problems that people have been working on since the 1900s, some of whom might consider themselves computer scientists, and some of whom might consider themselves pure mathematicians.
Maybe it's the case that computer scientists kept it going because it was useful for computer science even if it couldn't be used as a basis for a universal mathematics that could solve any problem due to Gödel's incompleteness theorem.
Also, I'm pretty sure Coq was created as a mathematical tool, rather than as a practical programming tool.
posted by delmoi at 4:50 PM on March 16, 2013 [7 favorites]
Previously: An application of theorem proving is TheoryMine (Mefi Thread, paper [pdf]), which uses Isabelle to create and prove theorems, and then sells the naming rights. The theorems aren't terribly interesting, as they concern randomly generated recursive types and functions.
One interesting bit in their FAQ:
HOW CAN I BE SURE THAT THE PROOF OF MY THEOREM IS CORRECT?
The computer programs behind TheoryMine are large and complex and so, like all large complex programs, almost certainly contain bugs. However, most of the program merely chooses theories and theorems to prove, and directs their proof. The construction of the proof is handled by a small, well inspected and highly trusted kernel program that only combines axioms and previously proved theorems. It is therefore vanishingly unlikely that any proof it constructs would be faulty.
And here's an interesting set of slides on automatic theorem proving.
posted by Monday, stony Monday at 4:52 PM on March 16, 2013
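The "small, well inspected and highly trusted kernel" the FAQ describes is the classic LCF architecture, which Isabelle inherits. Here's a toy Python sketch of the idea - entirely my own illustration, with made-up names, not TheoryMine's or Isabelle's actual code:

```python
# Toy LCF-style kernel. The point: Theorem objects can only be minted
# by the trusted rules below, so every Theorem in existence was derived
# from axioms by valid inference steps. (In a real LCF system the type
# system enforces this; Python can only approximate it with a token.)

class Theorem:
    _key = object()  # private token: outside code shouldn't forge theorems

    def __init__(self, prop, key):
        if key is not Theorem._key:
            raise ValueError("theorems may only be built by the kernel")
        self.prop = prop  # a proposition, e.g. ("->", p, q)

def axiom_identity(p):
    """Axiom schema: |- p -> p."""
    return Theorem(("->", p, p), Theorem._key)

def modus_ponens(thm_imp, thm_p):
    """From |- p -> q and |- p, conclude |- q."""
    op, p, q = thm_imp.prop
    assert op == "->" and thm_p.prop == p, "rule does not apply"
    return Theorem(q, Theorem._key)

# Derive |- a -> a via modus ponens from two instances of the axiom:
p = ("->", "a", "a")
t_imp = axiom_identity(p)      # |- (a -> a) -> (a -> a)
t_a = axiom_identity("a")      # |- a -> a
t = modus_ponens(t_imp, t_a)   # |- a -> a
print(t.prop)  # ('->', 'a', 'a')
```

Everything outside the kernel can be as buggy as it likes; a bug can at worst fail to produce a Theorem, never produce a false one.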
In February, I went to hear Jeremy Avigad (philosopher from CMU, mentioned in the article), give a really interesting joint math-philosophy colloquium talk at my university about understanding and formal verification. You can see his slides here (pdf).
One thing I would really like to understand better about formal verification and computer-assisted proof is what conceptual trade-offs there are between using a system like Coq, which uses a constructive (intuitionistic) logic, and a system like Isabelle, which uses a higher-order classical logic.
I am very much in agreement with weston and delmoi that calling type theory an idea from computer science is really abusive to the intellectual history. (But I should just get used to this: whenever a philosopher has a good idea, after enough time passes, either the idea or the philosopher will be claimed by some other discipline.)
posted by Jonathan Livengood at 5:03 PM on March 16, 2013
It seems worth pointing out that there are parts of math where open source software dominates. I mean, I sort of assume that someone somewhere wrote a Mathematica package to do what, say, Macaulay2 does, but I've never heard it mentioned.
It's also kind of astonishing how far Sage has come and how fast it develops. Some of this is that the low-hanging fruit is being finished. For whatever reason, my department's computer people have not made updating Sage a priority and are seven or eight versions behind (in Sage time, that's a year, basically), so my code doesn't work because I use something or other that didn't exist in Sage a year ago. So there's this self-perpetuating cycle where the people who use Sage don't use the department's computers because the installation is hopelessly out of date (plus I need an optional package), so we get used to not using the department's computers, so we don't complain to the computer people, who then conclude (quite reasonably) that the Sage installation is perfectly acceptable.
posted by hoyland at 5:07 PM on March 16, 2013
Also to note: some proof assistants, notably Coq, are really, really hard to use, in the sense that it's extremely hard to get them to prove what you want to prove; it takes a long time to get good at it, and even then you can get stuck. That's partly why there are so few programming projects that use them for formal verification.
posted by Monday, stony Monday at 5:09 PM on March 16, 2013
Voevodsky gave a short talk on this in 2006 at Princeton's Institute for Advanced Study. The video is available here. He starts out by arguing that the usual formalism for mathematics is deficient, because it represents mathematical objects in ways that obscure the relations and equivalences between them. Then he starts to sketch out what you'd need in a formalism in order to avoid that deficiency, and ends up with (a certain flavour of) type theory. A nice feature of this approach is that you can very naturally model it in a geometrical kind of way, with points, paths, regions etc. -- hence the name "homotopy type theory".
posted by logopetria at 5:11 PM on March 16, 2013
logopetria, is this going to be easier for logicians to understand, for homotopy theorists to understand, or do you kind of have to be both? (I'm neither, but closer to the latter than the former.)
posted by escabeche at 5:45 PM on March 16, 2013
I'm at 15 minutes, and he says Σi (from i=1 to n) = n(n+1)/2 is going to be the most complicated formula in the talk. It's intended for non-mathematicians.
posted by Monday, stony Monday at 5:51 PM on March 16, 2013
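As an aside, that formula is easy to sanity-check mechanically; a throwaway Python snippet (my own, nothing from the talk):

```python
# Check that 1 + 2 + ... + n equals n(n+1)/2 for the first hundred n.
for n in range(1, 101):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
print("Gauss's formula checks out for n = 1..100")
```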
I think I have more trepidation than excitement about software proofs. These proof languages are just another way for us to precisely specify something that is not exactly what we intended, the same problem that plagues "regular" programming languages.
posted by jepler at 6:00 PM on March 16, 2013
Yeah, the 2006 talk I linked to is not so much a tutorial on HTT as a teaser-trailer for it. For a more detailed introduction, take a look at the 2010 talk linked in kliuless's original post at "univalent foundations of mathematics".
posted by logopetria at 6:01 PM on March 16, 2013
Three years ago, Vladimir Voevodsky, one of the organizers of a new program on the foundations of mathematics at the Institute for Advanced Study in Princeton, N.J., discovered that a formal logic system that was developed by computer scientists, called "type theory" could be used to re-create the entire mathematical universe from scratch.
Maybe I'm misunderstanding "re-create the entire mathematical universe from scratch" but didn't Gödel show this to be impossible?
posted by vorpal bunny at 9:15 PM on March 16, 2013
hoyland: I can absolutely relate to that situation. I think one needs to just set up a cron job to do a "sage --upgrade" once a month or so....
posted by kaibutsu at 12:59 AM on March 17, 2013
Why not just use the well-known axiom systems such as the Peano axioms for numbers, ZFC for set theory, and so on? Russell and friends put an axiomatic foundation under much of mathematics. I worked as part of a team on a theorem prover around 15 years ago that was being used to formally prove safety and required properties of models of software. Within that prover we had pretty much any axiom system you could think of from first order logic. I'd guess they are using the type theory stuff just because the software was available and already well developed by the computer scientists.
But anyway, there are two problems with automated proof. Firstly, to prove anything interesting, you run into a combinatorial explosion problem of there being a large number of axioms (or previously proved propositions) that can be applied to transform the current mathematical sentence into another (which is hopefully "closer" to the statement you are trying to get to). It is much much worse than, say, possible moves in chess. Tailoring the system to solve very particular types of problems can work well, and that's what we did for parts of the software property proofs, but they lack generality. It has always been something of a mystery to me how mathematicians know to go in the right "direction."
The second problem is deciding what is interesting, what should be proved? Just by repeatedly applying valid axioms/transformation rules starting from axioms or previously proven results you can generate endless new results. But most of them will be really boring or not of much use. An infinitesimally small number will be fun stuff like Fermat's theorem. There will always be a place for mathematicians as the best ones have a great intuition for what is worth proving and what is interesting.
posted by drnick at 1:54 AM on March 17, 2013 [2 favorites]
vorpal bunny: I think the point is that the usual foundation in ZFC can be re-created in homotopy type theory. So the claim isn't that everything that's true can be proved, which would contradict Gödel, but that everything we can normally construct or prove from the usual set-theoretic foundations can be constructed or proved here as well. So you don't lose anything by moving to this new setting, but you might gain something in ease of use, for example.
drnick: The automated proof systems discussed here are generally intended as assistants to human mathematicians, helping us either to find proofs of theorems we've come up with, or (as in the case of systems like Coq, and Hales' system discussed in the main article) to verify rigorously that an apparent proof that we've come up with is genuinely valid. So they're more like very sophisticated spell-checkers for logic, rather than automated authors.
As to your question ("Why not just use the well-known axiom systems?"), Voevodsky addresses this at about minute 16-18 of the video I linked to above: "When mathematics is translated into the formal language, objects which we intuitively perceive as being 'equivalent' get completely different encodings." So for example, it's entirely obvious that the sum of the integers i from 1 to n is the same as the sum of the natural numbers i from 1 to n -- too obvious even to be worth stating, you might think. But "from the Zermelo–Fraenkel point of view, natural numbers and integers are two extremely different beasts. So there, this obvious equivalence becomes something that has to be proved, and it will be ugly and long." Now, you could probably modify the formalisation to fix that encoding problem in this specific case, but it's endemic to the whole formalisation, so you can't solve it in general. Homotopy type theory is designed to avoid that encoding problem altogether.
posted by logopetria at 2:36 AM on March 17, 2013 [1 favorite]
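To make the encoding problem concrete, here's a toy Python rendering of it -- my own sketch, loosely following the standard set-theoretic constructions, not anything from Voevodsky's talk -- showing how 3-as-a-natural and 3-as-an-integer come out as completely different objects:

```python
# Von Neumann naturals: 0 = {}, n+1 = n ∪ {n}. Python's frozenset
# lets us build them literally.
def nat(n):
    s = frozenset()
    for _ in range(n):
        s = s | frozenset([s])
    return s

# One standard construction of the integers: an integer is (morally) an
# equivalence class of pairs (a, b) of naturals representing a - b.
# Here we just keep a canonical representative pair.
def integer(n):
    return (nat(n), nat(0)) if n >= 0 else (nat(0), nat(-n))

# The "same" number 3, encoded two ways, is two utterly different objects:
print(nat(3) == integer(3))  # False
```

So the obvious fact that "3 the natural" equals "3 the integer" becomes, in the formalism, a statement about two unrelated encodings that needs an explicit (and tedious) proof.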
By the way, if someone does decide to write something to make proofs in Coq more humanly-readable, I'd like to suggest that it be called "CHCK", as a pun on "check" (i.e. what it does to proofs) and "chick" (a friendlier version of Coq).
And also, I suppose, "Chuck", if you program it to be able to roundhouse-kick a theorem into submission.
posted by logopetria at 2:44 AM on March 17, 2013
Maybe I'm misunderstanding "re-create the entire mathematical universe from scratch" but didn't Gödel show this to be impossible?

He showed you can write un-answerable questions, and that therefore, not every question can be answered.
But after 80 years of computer science, it's obvious that's the case - we can all think of an example easily: "Will this program halt with this input?" - remember: a program is just a long binary string, which is the same thing as a number, so asking if a program can halt is just asking about a property of a particular number.
But what we can do is just accept that, and then go about trying to find all the answerable questions we can, using computers to make that process go faster.
posted by delmoi at 7:13 AM on March 17, 2013
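delmoi's "a program is just a number" point is easy to make concrete. Here's a sketch (my own toy encoding; any injective encoding would do) that turns source text into an integer and back:

```python
# A program's source is a byte string, and a byte string is just a
# number written in base 256.
def program_to_number(source: str) -> int:
    return int.from_bytes(source.encode("utf-8"), "big")

def number_to_program(n: int) -> str:
    length = (n.bit_length() + 7) // 8
    return n.to_bytes(length, "big").decode("utf-8")

src = "print('hello')"
n = program_to_number(src)
assert number_to_program(n) == src
# So "does this program halt?" is literally a question about a
# property of the integer n.
```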
He showed you can write un-answerable questions, and that therefore, not every question can be answered.
But after 80 years of computer science, it's obvious that's the case - we can all think of an example easily: "Will this program halt with this input?"
Not quite. You've confused two different senses of the word "un-answerable". Gödel worked on the idea of logically independent statements: given a set of axioms for a mathematical theory, does there exist a statement that can neither be proven true nor false in that theory? Your example, the Halting Problem, is an algorithmically undecidable problem: given a family of yes/no questions, does there exist an algorithm that will give you the correct answer to any question in the family?
Here are a few examples that show the difference more clearly. One well-known statement that is independent of the ZFC axioms is the so-called continuum hypothesis:
There does not exist a set that is larger than the set of integers, but smaller than the set of real numbers.
Note that this is a singular statement: it is either true or false. But based only on the axioms of ZFC, neither possibility can be proven.
Another example of logical independence is the parallel postulate:
Given any line L and any point P that does not lie on L, there exists exactly one line through P that does not intersect L.
This is again a statement that can be either true or false. Euclid famously listed it as an axiom of planar geometry, and he was right to do so: it is independent of the system of axioms formed by all the other axioms he listed. In fact, if you negate the parallel postulate and keep all of Euclid's other axioms, you get a perfectly reasonable logical system describing the so-called hyperbolic planar geometry.
An example of an algorithmically undecidable question is Hilbert's tenth problem:
Given a polynomial equation with integer coefficients, does there exist a solution to the equation where all variables take integer values?
This is not a single statement, but a family of statements, here reformulated as questions. If you look at any particular polynomial equation, you get a single statement that may be true or false (either a solution exists, or no solution exists), and it may be quite easy to determine the correct answer in many of these cases. But the issue is that the entire family is so large, in some sense, that no algorithm can encompass all the questions it contains, and provide the correct answer to each and every one of them. This may happen even if every single instance of the question does have a definite yes/no answer.
This is what happens with the Halting Problem. Here's one formulation of the problem:
Given a program that runs on a particular Turing machine (taking no input), will the program halt eventually?
Each and every instance of this question has a definite answer: either the program will halt, or it will not. But you cannot put together an algorithm that will examine any program and tell you whether it will halt. The question is just too general.
As for the historical significance of these two notions: back around 1900, David Hilbert formulated an ambitious program, namely to put all of mathematics on a formal axiomatic basis, in such a way that it should be possible to express any mathematical statement in the language of these axioms, and to determine whether that statement is true or false. Gödel's work shows that this cannot be done. To be precise, Gödel showed that given any system of axioms that are powerful enough to express integer arithmetic, there will be statements that cannot be proven either true or false (unless the system is inconsistent, i.e. there is a statement that can be proven both true and false). He further showed that among these logically independent statements will be the statement that claims that the axioms are mutually consistent. The idea of algorithmically undecidable problems is in some ways a generalisation of these results: not only are there statements that cannot be proven either true or false, but even if we restrict ourselves to questions where we know that no such statements exist, there might still not be an algorithmic way to obtain the correct answer for any particular question.
posted by Tau Wedel at 10:36 AM on March 17, 2013 [3 favorites]
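The asymmetry Tau Wedel describes -- every instance has an answer, but no single algorithm settles them all -- shows up concretely in a naive search for Diophantine solutions. This is my own sketch, not anyone's real solver: it halts with a witness whenever a solution exists, but on an unsolvable equation the uncapped search simply runs forever.

```python
from itertools import count, product

def search_solution(poly, nvars, max_radius=None):
    """Enumerate integer tuples in growing boxes, looking for a zero of
    poly. Returns a witness tuple if one is found. On an unsolvable
    equation the uncapped search never returns; max_radius exists only
    so the demo below can terminate."""
    radii = count(0) if max_radius is None else range(max_radius + 1)
    for r in radii:
        for xs in product(range(-r, r + 1), repeat=nvars):
            # only test tuples on the boundary of the box, so each
            # tuple is examined exactly once across all r
            if max(map(abs, xs), default=0) == r and poly(*xs) == 0:
                return xs
    return None  # only reachable when the search was capped

# x² + y² = 5 has integer solutions, so the search halts with a witness:
print(search_solution(lambda x, y: x*x + y*y - 5, 2))

# x² + y² = 3 has no integer solution; uncapped, this call would run
# forever, so we cap the search radius purely to demonstrate:
print(search_solution(lambda x, y: x*x + y*y - 3, 2, max_radius=50))  # None
```

This is a semi-decision procedure: a "yes" answer comes with a certificate, but there is no general way to turn silence into a "no".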
Each and every instance of this question has a definite answer: either the program will halt, or it will not. But you cannot put together an algorithm that will examine any program and tell you whether it will halt. The question is just too general.

Well, that's not exactly correct. You actually cannot tell whether certain specific programs halt. You can say the answer "exists" but we just don't know it; there are programs that we will never be able to know whether they halt.
You can put an algorithm together that will tell you if a program halts, and it's quite simple: you just emulate the program and, if it halts, return true. (You can also return false if you detect a loop condition.) But the problem is pretty obvious: your program now takes just as long to halt as the original one would have, and may never do so.
And in fact, that algorithm would be an example of a specific program that we will never be able to know if it halts or not, given a specific input.
But the issue is that the entire family is so large, in some sense, that no algorithm can encompass all the questions it contains, and provide the correct answer to each and every one of them.

Well, looking at the Wikipedia article, it's actually stated as:
Given a Diophantine equation with any number of unknown quantities and with rational integral numerical coefficients: To devise a process according to which it can be determined in a finite number of operations whether the equation is solvable in rational integers.
Which is a little different. You can write an algorithm that would continue to search until it found a solution to every polynomial, but the problem is you wouldn't be able to say if that program will halt, so it doesn't meet Hilbert's criterion of taking a finite number of operations.
And in fact, if you had an Oracle Turing machine that could solve the halting problem, you could also solve the 10th problem by putting your potentially non-halting integer polynomial solver into it.
What's strange is that it seems like the reverse might also be true, according to Matiyasevich's theorem "Every computably enumerable set is Diophantine." - which would mean that for every program that halts there is a polynomial with an integer solution. Which is interesting.
Also the integer solution solver program, with a given input polynomial, would be another example of a program that you could not say whether or not it would halt.
posted by delmoi at 3:17 PM on March 19, 2013 [1 favorite]
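The "just emulate it" procedure delmoi describes is exactly one-sided, and a step budget makes that visible. In this sketch of mine, Python generators stand in for arbitrary machines: a "halted" verdict is certain, while a timeout tells you nothing.

```python
def halts_within(program, arg, max_steps):
    """Partial halting test by emulation: True means the program
    definitely halts on arg; None means we ran out of budget and
    learned nothing either way."""
    gen = program(arg)
    for _ in range(max_steps):
        try:
            next(gen)       # run the machine for one step
        except StopIteration:
            return True     # it halted: a certain answer
    return None             # budget exhausted: no verdict

def countdown(n):           # halts after n steps
    while n > 0:
        n -= 1
        yield

def forever(_):             # never halts
    while True:
        yield

print(halts_within(countdown, 10, 1000))  # True
print(halts_within(forever, 0, 1000))     # None
```

No choice of max_steps fixes this: for any budget there are programs that halt just after it runs out, which is one way to see why the full halting problem resists any single algorithm.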
Well, that's not exactly correct. You actually cannot tell whether or not certain specific programs halt or not. You can say the answer "exists" but we just don't know it, but there are programs that we will never be able to know if they halt or not.
[...]
Also the integer solution solver program, with a given input polynomial, would be another example of a program that you could not say whether or not it would halt.
For some input polynomials, you certainly can say whether the solver will halt. For instance, it will halt when given the equation x² + y² = 5, since that equation has the solution (x, y) = (1, 2), but it will not halt when given the equation x² + y² = 3. Even for equations where we don't know the answer, we can still be sure that there is an answer: either the program halts, or it does not. This is the case for any program and any input, even if we have no idea how to go about determining the answer. What I was trying to say was that this is a very different situation from Gödel's logically independent statements: it is not merely difficult to determine whether the continuum hypothesis holds within ZFC, it is impossible. You can assume that it's true, or you can assume that it's false; either way works equally well.
Which is a little different. You can write an algorithm that would continue to search until it found a solution to every polynomial, but the problem is you wouldn't be able to say if that program will halt, so it doesn't meet Hilbert's criteria of taking a finite number of steps.
You're right. Instead of "algorithm", I should have written "algorithm guaranteed to terminate" or something similar.
posted by Tau Wedel at 12:28 PM on March 20, 2013
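The claim that the solver provably never halts on x² + y² = 3 doesn't require an infinite search; a short congruence argument does it. Squares are only ever 0 or 1 mod 4, so a sum of two squares is 0, 1, or 2 mod 4 and can never equal 3. A quick Python check of that argument (my own illustration, not from the thread):

```python
# Squares mod 4 can only be 0 or 1 (checking all residues suffices)...
square_residues = {(n * n) % 4 for n in range(4)}
print(square_residues)  # {0, 1}

# ...so a sum of two squares mod 4 can only be 0, 1, or 2.
sum_residues = {(a + b) % 4 for a in square_residues for b in square_residues}
print(sum_residues)  # {0, 1, 2}

# 3 is not among them, so x^2 + y^2 = 3 has no integer solutions,
# and a brute-force solver would never halt on it.
assert 3 not in sum_residues
```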
For some input polynomials, you certainly can say whether the solver will halt.
Er, what I should have said was "a program that you could not say whether or not it would halt given a particular input." Or you could combine the program and the input into one program (i.e. g() = f(x)) and say that you can't say whether the derived program will halt. "Programs" and "(program, input) tuples" are basically isomorphic, so you end up using them interchangeably (just like how technically you should say "Turing machine" instead of "program", but no one bothers to do this :P)
Even for equations where we don't know the answer, we can still be sure that there is an answer: either the program halts, or it does not.
And the same is true of polynomials. We know some programs halt, and we know some polynomials have solutions. We know some programs loop forever (while(true){...}), and we know some polynomials have no integer solutions (x + x² = 1).
However, for some polynomials we do not know whether they have solutions, and for some programs (or program-input pairs) we cannot say whether they will halt without running a program that we cannot say will halt (according to Matiyasevich's theorem).
That is to say, they are both recursively enumerable sets.
What I was trying to say was that this is a very different situation from Gödel's logically independent statements
Are independence and incompleteness the same thing? Gödel came out with his incompleteness theorems (there are two of them) in 1931, and proved that you could not disprove the continuum hypothesis in ZFC in 1940. It wasn't until 1963 that someone proved it couldn't be proven either.
People have used the halting problem to prove Gödel's incompleteness theorem, as well as using Matiyasevich's theorem about polynomials to do the same thing. The incompleteness theorem proved the independence of one particular thing in ZFC: namely, its own consistency.
Also, are independence and decidability actually that different? Given the Curry–Howard correspondence, don't you need a program that halts in order to have a proof? Or can a program whose halting is undecidable serve as a proof of something?
posted by delmoi at 1:18 PM on March 20, 2013
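The undecidability both comments lean on comes from the classic diagonal argument, which is easy to sketch: assume a halting decider exists, then build a program that does the opposite of whatever the decider predicts about it. A sketch in Python, where `halts` is a hypothetical decider that cannot actually be implemented (the stub just raises):

```python
# Suppose we had a total function halts(f) that correctly reports
# whether calling f() eventually halts. No such function can exist;
# this stub only marks the assumption.
def halts(f):
    raise NotImplementedError("no such decider can exist")

def paradox():
    # If halts() says paradox() halts, loop forever; if it says
    # paradox() loops, halt immediately. Either answer is wrong,
    # so the assumed decider leads to a contradiction.
    if halts(paradox):
        while True:
            pass
    else:
        return
```

The contradiction shows only that no single algorithm decides halting for all programs; for any particular program, as Tau Wedel notes, a definite answer still exists.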
However, some polynomials we do not know the answer to whether they have solutions, and for some programs we cannot say whether whether they will halt (or programs for a specific input) without running a program that can not say will halt
Yes, but not knowing the answer is different from there being no answer. If we pick out a polynomial for which the answer is currently unknown, we have no surefire way of determining the correct answer. But that by itself is not reason to suspect that the answer is independent of our axioms, which would mean that we could assume either the existence or nonexistence of a solution to that polynomial equation and not run into a contradiction either way.
Are independence and incompleteness the same thing?
Independence and incompleteness are two different ways of expressing the same phenomenon. Incompleteness applies to a system of axioms with an underlying language, while independence applies to particular statements in that language. By definition, the axiom system is incomplete if and only if there is a statement in the underlying language that is independent of the axioms.
You're quite right that Gödel didn't prove the independence of the continuum hypothesis from ZFC; he proved the independence of two other statements (which basically say "this statement cannot be proven in ZFC" and "the axioms of ZFC are consistent"). I only meant to use the continuum hypothesis as an example of the notion of independence that Gödel was working with. I like it more as an example than Gödel's theorems, since it's a fairly natural statement that you might run into even if you haven't been thinking about consistency and completeness issues.
Also, are independence and decidability actually that different?
They're different enough that either can occur without the other. Wikipedia has examples:
For example, the theory of algebraically closed fields is decidable but incomplete, whereas the set of all true first-order statements about nonnegative integers in the language with + and × is complete but undecidable.
(That second example is called true arithmetic, by the way.) In practice, it seems that quite a lot of decision problems are shown to be undecidable by showing, essentially, that you can emulate a Turing machine within the bounds of that problem, thus relating it to the halting problem. Since the undecidability of the halting problem implies Gödel's incompleteness theorem, you do end up with a fairly close relationship.
I'm not sure how the Curry-Howard correspondence relates to all this, as I've never familiarized myself with the correspondence.
posted by Tau Wedel at 12:49 PM on March 22, 2013
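Regarding delmoi's Curry–Howard question upthread: the correspondence reads propositions as types and proofs as terminating programs, which is exactly why proof assistants such as the Coq system mentioned in the post (or Lean, used for this sketch) reject non-terminating definitions. A tiny illustrative example in Lean 4:

```lean
-- Under Curry–Howard, modus ponens is just function application:
-- a proof of p → q is a function, and a proof of p is its argument.
def modusPonens {p q : Prop} (h : p → q) (hp : p) : q :=
  h hp

-- Lean's kernel requires every definition to terminate; an
-- unboundedly recursive "proof" could inhabit any proposition,
-- so it is rejected. In that sense only halting programs count
-- as proofs.
```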
I'd guess this refers to Church's typed calculi, but apparently Bertrand Russell started it in order to get around the paradox in naive set theory that bears his name.
posted by weston at 3:51 PM on March 16, 2013 [1 favorite]