Why has there been such enormous resistance to the idea that there should be an account of rational belief in mathematics for propositions not known to be true or false?
I took this up in my chapter in Foundations of Bayesianism, and revised it as Chap. 5 of Towards a Philosophy of Real Mathematics, having noticed Polya's neglected account in Mathematics and Plausible Reasoning. Only afterwards did I discover that James Franklin had done some very useful work in 'Non-Deductive Logic in Mathematics', The British Journal for the Philosophy of Science, Vol. 38, No. 1 (Mar., 1987), pp. 1-18. He writes:
Previous work on this topic has therefore been rare. There is one notable exception, the pair of books by the mathematician George Polya on Mathematics and Plausible Reasoning [1954]. Despite their excellence, they have been little noticed by mathematicians, and even less by philosophers. Undoubtedly this is largely because of Polya’s unfortunate choice of the word ‘plausible’ in his title–‘plausible’ has a subjective, psychological ring to it, so that the word is almost equivalent to ‘convincing’ or ‘rhetorically persuasive’. Arguments that happen to persuade, for psychological reasons, are rightly regarded as of little interest in mathematics and philosophy. Polya in fact made it clear, however, that he was not concerned with subjective impressions, but with what degree of belief was justified by the evidence ([1954] I p. 68).
Much more recently Barry Mazur has considered Polya's ideas in 'Is it plausible?'. It is interesting to see that he also attends to the importance of analogy. I devoted a whole chapter to this topic in Towards a Philosophy of Real Mathematics.
Some people reviewing my book (e.g., Julian Cole, Math. Rev. 2004f:00005, and Eduard Glas, Philosophia Mathematica XII: 65) have come away with an impression of my views on the value of Bayesian thinking for the philosophy of mathematics which I did not intend to give. I certainly don’t believe it tells us much about what I consider to be the key issues of the philosophy of mathematics, such as how to characterise the rationality of the continuing quest for more adequate notions of space, quantity, dimension, symmetry, etc. On the other hand, it’s a very useful exercise to think about issues concerning plausibility through its lens. I invite a reconstruction of the following:
…it is my view that before Thurston’s work on hyperbolic 3-manifolds and his formulation of the general Geometrization Conjecture there was no consensus amongst experts as to whether the Poincaré Conjecture was true or false. After Thurston’s work, notwithstanding the fact that it has no direct bearing on the Poincaré Conjecture, a consensus developed that the Poincaré Conjecture (and the Geometrization Conjecture) were true. Paradoxically, subsuming the Poincaré Conjecture into a broader conjecture and then giving evidence, independent from the Poincaré Conjecture, for the broader conjecture led to a firmer belief in the Poincaré Conjecture. (John W. Morgan, ‘Recent Progress on the Poincaré Conjecture and the Classification of 3-Manifolds’, Bulletin of the American Mathematical Society 2004, 42(1): 57-78)
It doesn’t sound at all paradoxical to me if you take Polya’s “hope for a common ground” into account; see chapter 5. Also, I think I showed in chapter 6 how to deal with the famous problem of old evidence, which has taxed philosophers of science over the years. One reviewer, Joseph Melia (Metascience Volume 13, Number 3, December 2004), couldn’t quite see it, so I’ll spell it out.
Let’s rehearse the argument. The problem runs: imagine you have a piece of firmly established evidence $e$, which you believe with total confidence, $Pr(e) = 1$. You devise some theory $h$, and rate how likely it is to be true, $Pr(h)$. You then discover that $h$ accounts for $e$. What should this do to your degree of belief in $h$? Well, applying Bayes’ theorem:

$$Pr(h|e) = \frac{Pr(e|h) \, Pr(h)}{Pr(e)} = \frac{1 \cdot Pr(h)}{1} = Pr(h)$$

Apparently, there should be no boost to your degree of belief in $h$. This seems odd because scientists often do get encouraged when their theories turn out to explain some already observed phenomenon.
But perhaps we should be using a formula to reflect updating on our learning that $h$ accounts for $e$:

$$Pr(h \mid h \vdash e) = \frac{Pr(h \vdash e \mid h) \, Pr(h)}{Pr(h \vdash e)}$$
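As a toy numerical illustration of the contrast (a sketch not in the original argument; all probability values here are invented for the example):

```python
# A minimal sketch of the contrast above; all numbers are illustrative.
# With Pr(e) = 1, Bayes' theorem leaves the prior untouched; conditioning
# instead on the newly learned fact "h accounts for e" can raise it.

def bayes(prior, likelihood, marginal):
    """Pr(h | X) = Pr(X | h) * Pr(h) / Pr(X)."""
    return likelihood * prior / marginal

pr_h = 0.3  # hypothetical prior degree of belief in the theory h

# Conditioning on old evidence e, where Pr(e) = 1 and Pr(e | h) = 1:
print(bayes(pr_h, 1.0, 1.0))  # 0.3 -- no boost at all

# Conditioning on X = "h accounts for e", which was genuinely uncertain:
pr_X_given_h, pr_X_given_not_h = 0.9, 0.2   # hypothetical values
pr_X = pr_X_given_h * pr_h + pr_X_given_not_h * (1 - pr_h)
print(bayes(pr_h, pr_X_given_h, pr_X))      # ~0.66 -- a clear boost
```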
After Polya, I suggested the analogy with someone doing some mathematics. Consider that you are set the task of finding a formula for the curved surface area of a frustum of a cone (a cone with its nose chopped off), in terms of the height, $h$, and the radii of the upper and lower circles, $r$ and $R$. With your shaky and rudimentary calculus you arrive at the conjecture that the surface area is $\pi(R + r)\sqrt{(R - r)^2 + h^2}$. You are moderately confident that this is correct, but you accept that you may have erred.
Let’s imagine you’re now 50% certain that your formula is correct. A friend now says to you, “How would you feel about your formula if I told you that if you set $h = 0$, you’ll find your formula gives you the correct answer for the area of an annulus?”. You say, “Much more confident, thank you”, although you already know the area of an annulus.
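For concreteness, here is a quick numerical check of that special case (a sketch; the radii 3 and 2 are arbitrary):

```python
from math import pi, sqrt, isclose

def frustum_area(R, r, h):
    """The conjectured curved surface area of the frustum."""
    return pi * (R + r) * sqrt((R - r) ** 2 + h ** 2)

# At h = 0 the frustum flattens to the annulus between the two circles,
# whose area is pi * (R^2 - r^2).
R, r = 3.0, 2.0
assert isclose(frustum_area(R, r, h=0.0), pi * (R ** 2 - r ** 2))
```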
The key point is that you’ve learned something new: that your formula works in a special case. So let $F$ = “formula is correct”, and $A$ = “formula gives the right result for the annulus”. Then $Pr(A|F) = 1$, but now

$$Pr(F|A) = \frac{Pr(A|F) \, Pr(F)}{Pr(A|F) \, Pr(F) + Pr(A|\neg F) \, Pr(\neg F)}$$

The first of the summands in the denominator is $1 \times 0.5 = 0.5$, and the second, $Pr(A|\neg F) \, Pr(\neg F)$, is your belief that the formula is wrong and yet still gives the right answer for the annulus, let’s say 0.1. So you update your belief in your formula to $0.5/0.6 \approx 83\%$.
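The same computation in a few lines of code, with the numbers as in the text:

```python
def posterior_F(prior_F, pr_A_given_F=1.0, pr_A_and_not_F=0.1):
    """Pr(F | A) via Bayes' theorem, with the denominator expanded as
    Pr(A|F)Pr(F) + Pr(A, not-F)."""
    numerator = pr_A_given_F * prior_F
    return numerator / (numerator + pr_A_and_not_F)

print(posterior_F(0.5))  # 0.5 / (0.5 + 0.1) = 0.8333...
```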
Further checks include:

* setting $r = R$, where the formula gives $2\pi R h$, the curved surface area of a cylinder;
* setting $r = 0$, where it gives $\pi R \sqrt{R^2 + h^2}$, the curved surface area of a cone;
* checking that the expression is symmetric in $r$ and $R$, and that it has the dimensions of an area.

After similar updating, the probability is now extremely high, apparently resulting from conditioning on further old evidence. Which checks give most support? Why?
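Here is a sketch of how the belief climbs under repeated conditioning, one check at a time. The probabilities that a wrong formula nevertheless passes each check are invented for illustration (0.2 for the annulus check reproduces the joint value 0.1 above, since 0.2 × 0.5 = 0.1). Note that the smaller that probability, the bigger the boost a check gives, and that chaining the updates like this assumes the checks are independent given that the formula is wrong.

```python
def update(prior, pr_pass_given_wrong):
    # A correct formula passes every check, so Pr(pass | correct) = 1.
    return prior / (prior + pr_pass_given_wrong * (1 - prior))

belief = 0.5
# Hypothetical chances that a wrong formula still passes each check:
for pr_pass_given_wrong in [0.2, 0.1, 0.05]:  # annulus, cylinder, cone
    belief = update(belief, pr_pass_given_wrong)
print(belief)  # ~0.999: extremely high after three checks
```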