The most stunning point of this paper is the paradoxical benefit of deliberately assuming some kind of "blindness" in one's own thinking as a mathematician: in reckoning algebraically we proceed with our eyes closed, so to speak. What we look at is neither the real world nor our own mind but abstract signs on paper. That is the "algebraic", "formal", "symbolic" way of thinking.
Atiyah has this tradition start with Leibniz, and it marks exactly his opposition to Newton, the latter being mainly interested in physics and therefore restraining math by its grounding in the real world, whereas Leibniz would have understood the formal nature of the discipline. The antagonism re-emerges in the 20th century with Poincaré-Arnold on one side and Hilbert-Bourbaki on the other.
The point was aptly made in Brouwer's polemics against Hilbertian formalism: for the formalist, mathematical exactness is basically grounded in paper. "Op de vraag, waar die wiskundige exactheid dan wel bestaat, antwoorden beide partijen verschillend; de intuitionist zegt: in het menschelijk intellect, de formalist: op het papier" ("To the question of where this mathematical exactness exists, the two parties answer differently; the intuitionist says: in the human intellect, the formalist: on paper"); see Hermann Weyl, Philosophie der Mathematik und Naturwissenschaft, 1927, p. 49.
I guess a very large majority of people would still think that math is the rational, systematic account of what is (the "real world"), but Atiyah seems to say that from an inner-mathematical perspective, the purely formal conception of mathematics prevailed. Algebra was the "Faustian offer" handed to mathematicians: in exchange for the formidable machine of symbolic reasoning, we would have to sacrifice the meaning of what we are dealing with, at least temporarily.
jfarmer 34 days ago [-]
“It is a profoundly erroneous truism, repeated by all copy-books and by eminent people when they are making speeches, that we should cultivate the habit of thinking of what we are doing. The precise opposite is the case. Civilization advances by extending the number of important operations which we can perform without thinking about them. Operations of thought are like cavalry charges in a battle — they are strictly limited in number, they require fresh horses, and must only be made at decisive moments.”
— Alfred North Whitehead, "Introduction to Mathematics" (1911)
pmdulaney 34 days ago [-]
Part of what I take Whitehead to be saying is that the act of truly thinking is difficult to the point of being psychologically painful. And I believe that fear of this pain is at the root of procrastination in the realm of academic work.
The American Buddhist Cory Muscara has written: Procrastination is the refusal or inability to be with difficult emotions.
andrewflnr 33 days ago [-]
This view presupposes that things "we can perform without thinking about them" are done correctly. We don't get there without thinking about them first. Doing things without thinking about them is, at the level of each individual, a luxury we earn by thinking about them really hard at first. At the scale of society, well, this is supposedly what school is for. But for the society to "not think about" things, individuals have to continue thinking about them.
A lot of people don't get that far for a lot of tasks, so "think more" is not incorrect advice for them.
ccppurcell 34 days ago [-]
Well, to extend the analogy: I close my eyes when I'm listening to music, and even more so when I'm playing an instrument. And when I'm reading I can't be listening to music, at least not actively, and nothing too interesting.
I think we move back and forth to improve our understanding. The other day I read an article about Handel's Messiah. I then went and listened to it.
I recently found a family of tilings of the sphere by drawing pictures. To convince myself that the tilings really exist and I wasn't just tricking myself, I calculated the angles at the vertices (I used a computer for this). That in turn told me something about the relative size of the tiles. And so it goes.
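The angle check described above can be sketched in a few lines. This is a hypothetical example, not the commenter's actual computation: by Girard's theorem, a spherical triangle with angles (a, b, c) has area a + b + c − π on the unit sphere, so the angle excess of one tile tells you how many copies are needed to cover the sphere. Here it is for the classical (π/2, π/3, π/5) triangle of the icosahedral tiling:

```python
import math

# Girard's theorem: area of a spherical triangle on the unit sphere
# equals its angle excess (sum of angles minus pi).
angles = (math.pi / 2, math.pi / 3, math.pi / 5)
excess = sum(angles) - math.pi        # area of one tile = pi/30

# The sphere has area 4*pi, so the tile count is forced by the angles.
n_tiles = 4 * math.pi / excess
print(round(n_tiles))  # → 120
```

The same arithmetic also confirms that the tiles fit around each vertex: e.g. four copies of the π/2 corner meet at a vertex, and 4 × π/2 = 2π exactly.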
jll29 34 days ago [-]
If you want to call it "deliberate blindness", I'll call it "focus".
rramadass 34 days ago [-]
> I guess a very large majority of people would still think that math is the rational, systematic account of what is ("real world"), but Atiyah seems to say that from an inner-mathematical perspective, the purely formal conception of mathematics prevailed. Algebra was the "Faustian offer" handed over to mathematicians: in exchange for the formidable machine of symbolic reasoning, we would have to sacrifice the meaning of what we are dealing with, at least temporarily.
I am not sure that this has been a "good thing" for modern mathematics. While symbolic logic is definitely a necessity, it has been carried too far, in that most folks are unable, or find it difficult, to model "real world" phenomena. Abstraction proceeds from the concrete to the general, but if one loses sight of this link, all symbolic manipulation is mere game-playing without any understanding.

V. I. Arnold, in his essay On Teaching Mathematics, makes this very point - https://www.math.fsu.edu/~wxm/Arnold.htm

Excerpts:
The scheme of construction of a mathematical theory is exactly the same as that in any other natural science. First we consider some objects and make some observations in special cases. Then we try and find the limits of application of our observations, look for counter-examples which would prevent unjustified extension of our observations onto a too wide range of events.
As a result we formulate the empirical discovery that we made as clearly as possible. After this there comes the difficult period of checking as to how reliable are the conclusions.
At this point a special technique has been developed in mathematics. This technique, when applied to the real world, is sometimes useful, but can sometimes also lead to self-deception. This technique is called modelling. When constructing a model, the following idealisation is made: certain facts which are only known with a certain degree of probability or with a certain degree of accuracy, are considered to be "absolutely" correct and are accepted as "axioms". The sense of this "absoluteness" lies precisely in the fact that we allow ourselves to use these "facts" according to the rules of formal logic, in the process declaring as "theorems" all that we can derive from them.
It is obvious that in any real-life activity it is impossible to wholly rely on such deductions. The reason is at least that the parameters of the studied phenomena are never known absolutely exactly and a small change in parameters (for example, the initial conditions of a process) can totally change the result.
In exactly the same way a small change in axioms (of which we cannot be completely sure) is capable, generally speaking, of leading to completely different conclusions than those that are obtained from theorems which have been deduced from the accepted axioms. The longer and fancier is the chain of deductions ("proofs"), the less reliable is the final result.
The mathematical technique of modelling consists of ignoring this trouble and speaking about your deductive model in such a way as if it coincided with reality. The fact that this path, which is obviously incorrect from the point of view of natural science, often leads to useful results in physics is called "the inconceivable effectiveness of mathematics in natural sciences" (or "the Wigner principle").
"The subtle poison of mathematical education" (in F. Klein's words) for a physicist consists precisely in that the absolutised model separates from the reality and is no longer compared with it.
[...] nor discussing the danger of fetishising theorems are to be met in modern mathematical textbooks, even in the better ones. I even got the impression that scholastic mathematicians (who have little knowledge of physics) believe in the principal difference of the axiomatic mathematics from modelling which is common in natural science and which always requires the subsequent control of deductions by an experiment.
Attempts to create "pure" deductive-axiomatic mathematics have led to the rejection of the scheme used in physics (observation - model - investigation of the model - conclusions - testing by observations) and its substitution by the scheme: definition - theorem - proof. It is impossible to understand an unmotivated definition but this does not stop the criminal algebraists-axiomatisators.
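The sensitivity Arnold invokes (a small change in the initial conditions of a process can totally change the result) is easy to see in miniature. A minimal sketch, not from the essay, using the chaotic logistic map x → 4x(1 − x):

```python
# Iterate the logistic map from two starting points differing by 1e-9.
# At r = 4 the map is chaotic: the tiny gap roughly doubles each step,
# so by step ~30 the two trajectories bear no relation to each other.
def logistic_orbit(x0, steps):
    xs = [x0]
    for _ in range(steps):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_orbit(0.3, 60)
b = logistic_orbit(0.3 + 1e-9, 60)

gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))
print(f"max separation over steps 30-60: {gap:.3f}")
```

A nine-decimal-place agreement in the "axioms" (here, the initial condition) still yields completely different conclusions after a long enough chain of steps, which is exactly Arnold's caution about long chains of deductions.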
webnrrd2k 33 days ago [-]
This whole argument reminds me a bit of the book Gödel, Escher, Bach, which is (partially) about the interplay of formal and informal systems. Both are necessary, and one without the other tends to be useless or sterile.
Sometimes, when I've asked for a reason/explanation for some physics problem, I get a response like: "because of this equation". And in my brain there is a mismatch, where it doesn't feel at all like an actual explanation, because abstract symbol manipulation feels like, well, symbol manipulation.
But I think that's the wrong interpretation, because someone who has been really deep in mathematics for a long time feels the math at a deeper level than I do. For example, take explaining something that's true because F = ma. Before I had any physics knowledge, someone answering a question with "It's because F = ma" would have felt off in the same way. After doing a bunch of problems, F = ma is a perfectly fine answer. They are just a lot better at getting meaning from the symbols and following the manipulations.
rramadass 33 days ago [-]
The essay is not about formal vs. informal systems. In fact your example illustrates the very point that Arnold is making. In "F=ma" the symbols actually map-to something in the real world. The "domain of discourse/interpretation" for the symbols is well known and there is no confusion on what the equation stands for.
This article, "Why F = ma is the most important equation in physics", explains the nuances of the above equation: https://bigthink.com/starts-with-a-bang/most-important-equat... As the article points out, it might be tempting to think of the above as a special case of the equation of a straight line (i.e. y = mx + c), but that would not be correct.
Symbolic logic is concerned only with the "forms" of the argument and inference (hence the name "Formal" system) and not their meaning/context. It is the latter which brings mathematics "alive" else as the GEB book shows we are just playing games.
photonthug 34 days ago [-]
I’ve heard of math envy among physicists or computer scientists, but physics envy, where mathematicians get riled up about criminal axiomatisers, seems less common.
I don’t necessarily accept the premise that what’s described as definition-theorem-proof is hugely different from observe-model-test, but if I did? Why not both? We only have a few ways to manufacture new high-quality knowledge, and I’m not sure there is a point to arguing about the “main” or best one.
And while it might be impossible to “understand” an unmotivated/ungrounded definition, what is really required is to entertain it long enough to see where it leads. Some gardens will bloom and some will not, based on consistency and richness, and that’s how you know whether the definition is good. One might call that “testing by observation” too, eh?
rramadass 33 days ago [-]
You need both to do mathematics. But for teaching/understanding, one should start with observe-model-test (concrete concepts/things and their relations), leading on to definition-theorem-proof (symbolic abstraction and manipulation). While the approaches are very different, there is a synergy between them, which is what motivates and grounds the latter in reality.
practal 34 days ago [-]
Lots of interesting thoughts and insights in that article. I find it especially interesting to relate Geometry to Space, and Algebra to Time.
enriquto 34 days ago [-]
Intriguing that the word "spacetime" does not appear in this text. For sure it wouldn't be related to algebraic geometry!
Jun8 34 days ago [-]
This lecture achieves the astonishing feat of being very accessible to non-mathematicians while being deeply insightful. The global vs local distinction mentioned, although not the same, brought to mind Dyson’s distinction between “birds” and “frogs”: https://www.ams.org/notices/200902/rtx090200212p.pdf
practal 33 days ago [-]
Love the von Neumann "aufgewärmte Suppe" ("warmed-up soup") anecdote.
Receiving the RSE Fellowship with a handshake from him was one of my highlights during my beautiful time in Edinburgh. May he rest in peace.
zyklu5 33 days ago [-]
Atiyah is truly one of the giants of modern mathematics. I remember, long ago, struggling through a reading course on his and Bott's Yang-Mills paper in graduate school. Like many great works of math, it too had that paradoxical characteristic of transforming seemingly 'non-mathematics' into mathematics* by reversing the usual direction of application, in this case from physics to math. It would start a whole movement that produced much of modern geometry's greatest hits, from the theorem of Donaldson (his student) on 4-manifolds to Witten's great papers.
* A reason I think modern LLM architectures, as they currently stand with their underlying attention mechanisms, will not produce interesting new mathematics. A few other ideas are going to be needed.
bluenose69 34 days ago [-]
An RIS entry:
TY - JOUR
TI - Mathematics in the 20th century
AU - Atiyah, Michael
T2 - Bulletin of the London Mathematical Society
AB - A survey is given of several key themes that have characterised mathematics in the 20th century. The impact of physics is also discussed, and some speculations are made about possible developments in the 21st century.
DA - 2002/01//
PY - 2002
DO - 10.1112/S0024609301008566
DP - DOI.org (Crossref)
VL - 34
IS - 1
SP - 1
EP - 15
J2 - Bull. Lond. Math. Soc.
LA - en
SN - 0024-6093, 1469-2120
UR - https://www.cambridge.org/core/product/identifier/S0024609301008566/type/journal_article
Y2 - 2025/02/09/11:08:49
ER -