The likelihood that there's interesting or important math is pretty high
October 16, 2015 9:34 AM Subscribe
Shinichi Mochizuki and the impenetrable proof - "Fesenko has studied Mochizuki's work in detail over the past year, visited him at RIMS again in the autumn of 2014 and says that he has now verified the proof. (The other three mathematicians who say they have corroborated it have also spent considerable time working alongside Mochizuki in Japan.) The overarching theme of inter-universal geometry, as Fesenko describes it, is that one must look at whole numbers in a different light — leaving addition aside and seeing the multiplication structure as something malleable and deformable. Standard multiplication would then be just one particular case of a family of structures, just as a circle is a special case of an ellipse." (previously: 1,2; via)
"Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space," [mefi's own] number theorist Jordan Ellenberg, of the University of Wisconsin–Madison, wrote on his blog a few days after the paper appeared.
also btw...
-UCLA mathematician Terence Tao has produced a solution to the Erdős discrepancy problem [1,2,3]
-What does it feel like to invent math?*
The biggest problem with contemporary math is not that the proofs are long, or really hard for even the best mathematicians to verify, but that the very problems are virtually impossible to understand without being an expert in the field, let alone for the man on the street.
(and by "contemporary" I mean anything serious posed as recently as any of us were born)
posted by sammyo at 9:58 AM on October 16, 2015 [2 favorites]
The Erdős discrepancy problem mentioned above doesn't require any specialized knowledge to understand. Understanding the proof is a different matter (and proving it in the first place requires a level of intelligence so high that you need special notation merely to describe how smart you are).
posted by It's Never Lurgi at 10:03 AM on October 16, 2015 [2 favorites]
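The problem really is easy to state: for any infinite sequence of +1s and −1s, the partial sums taken along multiples of some fixed step size can be made as large as you like in absolute value. A minimal brute-force sketch of the quantity involved (the example sequence is arbitrary, not one from the literature):

```python
def discrepancy(x):
    """Largest |x[d] + x[2d] + ... + x[kd]| over all step sizes d and
    lengths k, treating x as 1-indexed. Tao proved this is unbounded for
    every infinite +/-1 sequence; here we only measure a finite prefix."""
    n = len(x)
    best = 0
    for d in range(1, n + 1):
        partial = 0
        for pos in range(d, n + 1, d):  # positions d, 2d, 3d, ...
            partial += x[pos - 1]
            best = max(best, abs(partial))
    return best

seq = [1, 1, -1, 1, -1, -1, 1, 1, -1, 1, -1]  # an arbitrary example
print(discrepancy(seq))  # → 3
```

For intuition: the all-ones sequence has discrepancy equal to its length (take d = 1), while a strictly alternating sequence keeps the d = 1 sums bounded but blows up along the even positions.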
Hoping for more insight into this, I read the comments on the first article. It's Nature, I thought, the comments will be informative. Don't make the mistake I did.
I choked on Category Theory, part of the reason why I decided not to go farther than undergrad in math. Reading about this makes me very very glad I did that. It sounds fascinating, but having to rewire your fundamental understanding of sets to do this sounds mentally painful.
Also, reading the wikipedia page, I now understand the freaking conjecture. Would it kill any of the reporters to actually give the stupid thing instead of dancing around it?
posted by Hactar at 10:04 AM on October 16, 2015 [3 favorites]
FTA: "he wrote that to understand his work, there was a 'need for researchers to deactivate the thought patterns that they have installed in their brains and taken for granted for so many years'".
To invent whole new patterns of thought, publish impenetrable proof, then be all "go figure out, geniuses" seems like a dick move and makes me question his entire proof and approach.
posted by Annika Cicada at 10:04 AM on October 16, 2015
The amount of effort that mathematicians put into reviewing and verifying their colleagues' work always impresses me. In my field, peer review sometimes feels perfunctory (or just straight-out hostile). Maybe I have a bit of grass-is-always-greener vision, but it seems like a truly productive, collaborative process in math.
posted by mr_roboto at 10:39 AM on October 16, 2015 [1 favorite]
Actually, he's presented it to the open research community and been quite polite when an error was found, acknowledging and correcting it. There's been one seminar, and there's another at the Clay Institute, which is a pretty legit venue.
posted by sammyo at 10:42 AM on October 16, 2015 [1 favorite]
I'm not suggesting it's wrong, I just get a sense of him being a little too precious about this.
The quote from his professor here sums up how I feel:
And that, says Faltings, is a problem. “It's not enough if you have a good idea: you also have to be able to explain it to others.” Faltings says that if Mochizuki wants his work to be accepted, then he should reach out more. “People have the right to be eccentric as much as they want to,” he says. “If he doesn't want to travel, he has no obligation. If he wants recognition, he has to compromise.”
posted by Annika Cicada at 10:49 AM on October 16, 2015
From the Nature article:
Most mathematicians expect that it will take many more years to find some resolution. (Mochizuki has said that he has submitted his papers to a journal, where they are presumably still under review.) Eventually, researchers hope, someone will be willing not only to understand the work, but also to make it understandable to others — the problem is, few want to be that person.
Most specialized fields are far too complacent about this. To me it is a fundamental problem in our society. In general, we need to encourage dissemination of knowledge at every level. That might mean, for example, a better structure of specialist instructors at the University level, like this Coward?
And I definitely do not mean this as a criticism of Mochizuki's ways. There might be something interesting to think about there, but I don't think it is his problem at all. It is society's problem that there isn't enough incentive to be a person who makes connections.
posted by Chuckles at 11:14 AM on October 16, 2015 [1 favorite]
I think it is wrong to fault Mochizuki for releasing this the way he did. This is pretty much the ideal of how math is supposed to be done. He spent decades wandering far off the usual paths, not really knowing if his musings would take him somewhere useful. When he completed the circle (which is a special case of an ellipse, haha) he knew exactly how his fellow mathematicians would likely react. The thing is, for mathematical work to be truly verified, it has to be verified without hand-holding by the original author, because that hand-holding might carry you right past a mistake or flaw you'd notice if you were doing the work for yourself. So I think Mochizuki did the right thing. He released it without fanfare, and let his colleagues decide whether it was crazy or worthwhile. Anything else would have led to even more accusations of egotism and delusions of grandeur.
The funny thing about math is that it often loops around in these paths through abstractions that look absolutely crazy and useless until they reconnect with the world in fantastically useful and non-obvious ways. If it holds up then Mochizuki's paper really is a paper from the future. But the future is just the place where we will find ourselves tomorrow.
posted by Bringer Tom at 11:14 AM on October 16, 2015 [20 favorites]
I guess I have this idea that our problems have become sufficiently complicated that multiple minds are needed to work on solutions in groups, and that other groups can then go over them in peer review. This seems to me to move the world forward together in leaps, as opposed to one person doing something in a particular way where we "just hope" that maybe something more will come from it. That does not seem optimal in my opinion.
Maybe I just don't understand how math works and you have to have some person go off for years in isolation and come back with some great new revelation. It just seems so...17th century Newton to me?
( yeah yeah, I'm talking hive mind stuff lol, this is Metafilter though :-)
posted by Annika Cicada at 11:31 AM on October 16, 2015
(fixed the century in my comment. moving along...)
posted by Annika Cicada at 11:34 AM on October 16, 2015
Terence Tao certainly doesn't think that that's how math works, and I think he's sufficiently accomplished in the field as to be worth listening to on that point. I agree with him that it's pretty silly to locate our ideals of how math should be done in this Romantic archetype of the lone hero.
posted by invitapriore at 11:43 AM on October 16, 2015 [2 favorites]
Newton and Leibniz are the most obvious parallels, because if you look at calculus from the standpoint of any of the math that went before, around the time you get to "and now we're going to assume these slices get infinitely small" you can hear the ghost of Pythagoras calling for the men with butterfly nets. But if you follow that line of reasoning through the singularity of impossible infinitely small slices, you come out on the other side with elegant tools for describing the behavior of real-world systems. It then develops that you can follow similar reasoning through other singularities to other useful results like differential equations and Fourier transforms. But when Newton and Leibniz did it, it was really the kind of departure from established mathematical thinking that a lot of people would have refused to even follow, because how the hell could it ever be useful for anything?
posted by Bringer Tom at 11:46 AM on October 16, 2015 [4 favorites]
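The "infinitely small slices" move can be sketched numerically: chop an interval into slices, add up the slice areas, and watch the total settle toward the exact answer as the slices shrink. A minimal illustration (function and interval are my own choices):

```python
def riemann_sum(f, a, b, n):
    """Left-endpoint Riemann sum: split [a, b] into n equal slices and
    add up the slice areas; as n grows, the sum approaches the integral."""
    h = (b - a) / n
    return sum(f(a + i * h) for i in range(n)) * h

# Area under f(x) = x^2 on [0, 1]; the exact answer is 1/3.
for n in (10, 100, 1000):
    print(n, riemann_sum(lambda x: x * x, 0.0, 1.0, n))
```

The sums 0.285, 0.32835, 0.33283... creep toward 1/3, which is exactly the limit the "impossible" infinitely small slices are shorthand for.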
Terence Tao notwithstanding, I seriously wonder what today's world would look like without lone heroes like Newton, Fourier, Tesla, Gödel, Turing, or Shannon.
posted by Bringer Tom at 11:53 AM on October 16, 2015 [1 favorite]
Well, for the time being then, as a layman, I don't see a way to distinguish between this and TimeCube.
I mean unless he can build a warp drive with it or something, it sounds like we're pretty much supposed to take his word that it's a brilliant and groundbreaking mathematical advance. That's a problem.
posted by Naberius at 12:21 PM on October 16, 2015 [1 favorite]
Looking into the recent past, I understand how we feel compelled to be enthralled by the "lone hero" myth, but like Newtonian physics itself, that "solutions archetype" is a model that will be superseded by a more complete one as we scale our problem-solving requirements beyond what a single "lone wolf" can reasonably solve in one lifetime with a single brain.
posted by Annika Cicada at 12:45 PM on October 16, 2015
As a layman you couldn't understand 99% of published mathematical proofs anyway, so I don't see the significance.
posted by dilaudid at 12:47 PM on October 16, 2015 [7 favorites]
The biggest problem with contemporary math is not that the proofs are long, or really hard for even the best mathematicians to verify, but that the very problems are virtually impossible to understand without being an expert in the field, let alone for the man on the street.
(and by "contemporary" I mean anything serious posed as recently as any of us were born)
That's not entirely true, by the way. Both Graph Theory and Knot Theory contain unsolved and (potentially) important problems that are approachable by relative laymen.
posted by Xyanthilous P. Harrierstick at 1:08 PM on October 16, 2015 [2 favorites]
Other people who, like me, are fans of math but who barely understand the questions addressed by contemporary mathematics, much less the answers, might enjoy reading Ted Chiang's Division by Zero. It's a good one. I assume it's won either a Hugo or a Nebula, because I think all but a couple of the things Chiang has written have won one or another of the major awards.
posted by You Can't Tip a Buick at 1:08 PM on October 16, 2015 [4 favorites]
I try to keep up with the Numberphile channel on YouTube, which has a wide range of mathematicians talking about all sorts. Some of it is familiar territory, some of it just incomprehensible in various ways (why does this matter? How does it connect to anything? Or just, whut?), some of it is cool (like how to get Pi from fractals), but some of it really does shift my perceptions of what mathematics is and how to think about it.
In particular, Prof Carol Wood's stuff about infinities, infinitesimals and the number line (eg) made something go 'click' and added an extra dimension to my appreciation of how to consider such stuff in a way that doesn't hurt but puts them into part of a structure of number, a pattern of concepts, that also lets me see natural and real numbers as an abstraction which is part of a larger set of concepts. That's something I could give lip service to before, but not see.
So no chance of ever understanding most of modern mathematics, but now and again the mists clear a little.
posted by Devonian at 1:19 PM on October 16, 2015 [3 favorites]
Looking into the recent past, I understand how we feel compelled to be enthralled by the "lone hero" myth, but like Newtonian physics itself, that "solutions archetype" is a model that will be superseded by a more complete one as we scale our problem-solving requirements beyond what a single "lone wolf" can reasonably solve in one lifetime with a single brain.
Thing is, how to distribute this kind of work over multiple brains is itself a very difficult and unsolved problem.
posted by atoxyl at 1:49 PM on October 16, 2015 [2 favorites]
Like I kind of agree that this would be the next level but there are limits to how effectively it can be done without, you know, telepathy.
posted by atoxyl at 1:50 PM on October 16, 2015 [1 favorite]
Thing is, how to distribute this kind of work over multiple brains is itself a very difficult and unsolved problem.
posted by atoxyl at 1:49 PM on October 16
Clearly, this is why we need to figure out how to mash everyone's minds together into one massive world-spanning ultramind. Separated individual existences are for chumps.
posted by You Can't Tip a Buick at 2:41 PM on October 16, 2015 [5 favorites]
What would happen if maths was reported like sport, or vice versa?
posted by stanf at 2:52 PM on October 16, 2015
maths was reported like sport, or vice versa?
probably we'd need to reach a transatlantic agreement on which words are plural and which singular before we could attempt such an experiment.
posted by You Can't Tip a Buick at 2:55 PM on October 16, 2015 [6 favorites]
What would happen if maths was reported like sport, or vice versa?
I wouldn't be able to tell the difference. They're both full of numbers I don't understand.
posted by The Hyacinth Girl at 2:56 PM on October 16, 2015 [3 favorites]
I guess I have this idea that our problems have become sufficiently complicated that multiple minds are needed to work on solutions in groups, and that other groups can then go over them in peer review
In engineering, we sometimes call that design by committee.
posted by polymodus at 4:36 PM on October 16, 2015 [1 favorite]
The group mind thing can be useful, but in my experience in software development, it also comes with drawbacks. There are advantages and disadvantages to both modes of working. There's no need to be absolutist and choose "team collective" over "team lone genius"; there are major benefits to viewing the two working styles as complementary approaches.
posted by saulgoodman at 12:18 PM on October 17, 2015
For one thing, to coordinate working in groups requires introducing some inefficiency: a huge part of the working effort has to be invested in communicating, tracking work efforts so people aren't duplicating work or working at cross purposes, prioritizing work, and clearing up misunderstandings. Basically, collaborative work always requires investing a ton of extra time and effort in project management to manage and coordinate the efforts of the team or teams doing the work.
posted by saulgoodman at 12:24 PM on October 17, 2015 [1 favorite]
This fits into something I've been thinking about for a while. I was wondering why some simple assertions in maths are very easy to prove (e.g., Pythagoras' theorem) while others are very hard. Similarly, some questions are easy to answer ("what is A × B?") while some are much harder, even when you might think they're closely related ("what are the factors of C?"). Anyway, here's my answer. My apologies if it doesn't seem to make much sense:
Consider strings made up of every possible arrangement of characters, first ordered by size and then in some typographical order. By definition, this set of strings will include every possible mathematical statement: true, false, or merely gibberish. Among the true mathematical statements are the basic ones called axioms. Other mathematical statements can only be shown to be true if we can reach them by implication from those axioms, or by implication from ones implied by the axioms, or implication from implication and so forth. These chains of implication are what we call proofs.
There can be only so many brief proofs; as proofs increase in length there are more and more of them. But here's the thing: there isn't necessarily any connection between the length of a statement and the length of its briefest proof. So you might have a very brief statement that requires a very long proof. Worse, something might appear to be empirically true, but there may be no chain of implication that reaches it from our standard axioms!
I don't think this is just a restatement of Gödel's incompleteness theorems, by the way. I'm not saying that there necessarily are statements that can't be proved; I'm saying that (a) Mochizuki's proof of the abc conjecture may be the shortest one there is; (b) there may be other, additional axioms that would give a shorter path; but (c) we have no way of telling whether those axioms are true, other than empirically. That is, we settled on our existing axioms because they're sufficient for arithmetic, which is ultimately based on our empirical experience ("A stone and a stone makes two stones. A fish and a fish makes two fish.") We don't inherently know that they're true, other than by definition. If we add axioms based on our empirical experience of, say, the distribution of prime numbers, we won't actually know that those axioms and the proofs based on them are true; we'll just have defined them as true.
Oh, and we'll eventually run out of short proofs and in fact we may have already done so.
posted by Joe in Australia at 3:53 AM on October 18, 2015
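The enumeration described above (all strings, first by length, then alphabetically within each length) is sometimes called shortlex order, and it can be made concrete. A toy sketch over a two-letter alphabet (the alphabet and function name are my own, purely illustrative):

```python
from itertools import count, islice, product

def shortlex(alphabet):
    """Yield every string over the alphabet: shortest first, then in
    alphabetical order within each length."""
    for n in count(0):
        for chars in product(alphabet, repeat=n):
            yield "".join(chars)

print(list(islice(shortlex("ab"), 7)))
# → ['', 'a', 'b', 'aa', 'ab', 'ba', 'bb']
```

There are only |A|^n strings of each length n, which is the comment's point that there can be only so many brief statements, and hence only so many brief proofs.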
check out gregory chaitin's paradoxes of randomness :P
...some mathematical facts are true for no reason, they are true by accident, or at random. In other words, God not only plays dice in physics, but even in pure mathematics, in logic, in the world of pure reason. Sometimes mathematical truth is completely random and has no structure or pattern that we will ever be able to understand. It is not the case that simple clear questions have simple clear answers, not even in the world of pure ideas, and much less so in the messy real world of everyday life.
math contains multitudes (multiverses even!)
posted by kliuless at 6:30 AM on October 18, 2015 [3 favorites]
Yes, I think what I'm saying is basically equivalent to Chaitin, particularly in Randomness in Arithmetic.
Where I depart from him seems to be the relationship between maths and reality, but we come out at the same place: not all truths can be proven from a given set of axioms.
The key thing for me is that our axioms aren't intrinsic to reality; they're just a formalisation of the system we use to describe the world. What I mean is this: we invented numbers to describe quantities of things - one rock, one fish, three sheep, and so forth. I suppose we originally had unrelated names for each individual quantity, but at some early point we invented a place-value system that can theoretically describe a quantity of any size. And that is where our troubles started.
The place value system implicitly assumes the existence of a successor function such that you can always add one to any number. In reality, you can't always add one to every group of rocks, fish, or sheep - there aren't that many of them! That is, the idea of "a number" isn't actually equivalent to "the quantity of a thing". It's an abstraction that can be used to describe a quantity, and even though you can do operations on numbers that match the way you can do operations on real objects ("divide this pile of eight rocks between four people" ≡ "8 ÷ 4") the converse isn't necessarily true.
So we have a system of manipulating the symbols that can represent real objects and we can formalise that system down to a set of axioms. But we have fallen into the error of thinking that our system can describe everything, even itself. Sometimes it can (e.g., "what is the sum of all integers between 1 and 1,000") but there is no assurance that it can, even for quite commonplace assertions ("there are an infinite number of prime pairs separated by two").
Really, the only thing math is pretty-well guaranteed to do is describe the quantities involved in physical operations.(*) The fact that we can manipulate mathematical symbols in interesting ways and predict the outcome is a credit to our ingenuity; it doesn't necessarily imply anything about the real world, or our ability to predict the outcome of manipulating those symbols in other ways.
(*) Leaving quantum stuff aside.
posted by Joe in Australia at 9:22 PM on October 18, 2015
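The place-value point above can be illustrated in a few lines: the representation never runs out, precisely because the abstraction assumes every number has a successor, which finite piles of rocks or fish do not. A minimal sketch (function name is mine):

```python
def to_digits(n, base=10):
    """Place-value digits of a non-negative integer. Works for
    arbitrarily large n because the abstraction always lets you
    divide again - there is a successor for every number."""
    digits = []
    while True:
        digits.append(n % base)
        n //= base
        if n == 0:
            return digits[::-1]

print(to_digits(2015))     # → [2, 0, 1, 5]
print(to_digits(2015, 2))  # the same quantity in binary
```

The same quantity gets different symbol strings in different bases, underlining that "a number" is a description of a quantity, not the quantity itself.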
posted by chavenet at 9:38 AM on October 16, 2015 [1 favorite]