The mentioned size and density of Whitehead & Russell's Principia make the few dozen pages of Gödel's On Formally Undecidable Propositions of Principia Mathematica and Related Systems one of the greatest "i ain't reading all that/i'm happy for u tho/or sorry that happened" mathematical shitposts of all time.
Gödel had great respect for their work and was considered one of only a few people at the time to have read and understood it. He wrote an entire paper later in life explaining that he wouldn’t have come to his result without Principia, because it showed him a base case to work from. He and Russell continued to meet and discuss logic well into the ’50s.
First, you’re arguing with someone making a joke. Second, yes, he understood the axioms very well and was able to use the system to prove his incompleteness theorems, but citation needed on whether he actually read the full three volumes. Russell himself said: “I used to know of only six people who had read the later parts of the book. Three of these were Poles... The other three were Texans [Gödel was neither Polish, nor Texan]”
Thanks for sharing! I like to look at this example through the lens of the debate over whether mathematics is invented or discovered.
> That is how Whitehead and Russell did it in 1910. How would we do it today? A relation between S and T is defined as a subset of S × T and is therefore a set.
> A huge amount of other machinery goes away in 2006, because of the unification of relations and sets.
Relations are a very intuitive thing that I think most people would agree are not the invention of one person. But the language to describe and manipulate them mathematically is an invention, and it can have a dramatic effect on the way they are communicated.
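In today's terms that unification fits in a few lines; a sketch (the names `less_than` and `related` are mine, just for illustration):

```python
# A binary relation between S and T, in the modern style: just a subset
# of the Cartesian product S x T, i.e. a set of ordered pairs.
S = {1, 2, 3}
T = {1, 2, 3}

# The "less than" relation on S and T, written as a plain set:
less_than = {(a, b) for a in S for b in T if a < b}

def related(rel, a, b):
    # a is related to b exactly when the pair (a, b) is a member of the set
    return (a, b) in rel
```

With that, `related(less_than, 1, 3)` holds while `related(less_than, 3, 1)` does not, and all the set machinery (union, intersection, subset) applies to relations for free.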
I'd say mathematics is discovered and definitions are invented. E.g. "ordered pair" is not part of set theory, it's an invented name we give to a convenient definition of a set schema.
Even base-N representations are an invention: S() and zero are all you need, but Roman Numerals were an improvement over base-1 representations and base-N is significantly more convenient to work with.
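A sketch of that claim, with `ZERO`, `S`, and `to_int` as my own illustrative names for the successor encoding:

```python
# Unary ("successor") naturals: ZERO plus a successor function S is all
# you strictly need; base-N is a convenience layered over the same values.
ZERO = ()

def S(n):
    return (n,)                 # successor = one more layer of nesting

def to_int(n):
    # peel off successor layers to recover the familiar integer
    count = 0
    while n != ZERO:
        (n,) = n
        count += 1
    return count

two = S(S(ZERO))                     # the number 2, spelled S(S(0))
assert to_int(two) == int("10", 2)   # the same number, base-2 spelling
```

The value is identical either way; base-N just makes large numbers take logarithmically many symbols instead of linearly many successors.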
Be careful about carrying assumptions between modern, formalized set theory and naive set theory.
The axiom schema of specification is added to avoid Russell's paradox.
A set in the naive meaning is just a well-defined collection of objects.
As ordered pairs are a binary relation, foundedness and order are operation-dependent, and treating an individual set as unordered is a useful assumption.
But IMHO it is problematic from a constructivist mathematics perspective. The ambiguity of a naive set, especially when constructing the natural numbers, which are obviously totally ordered, is a challenge to overcome.
I know the Principia was focused on successor sets, so it mostly avoided this, but IMHO they would have hit it when trying to define an equality operation.
If you remember that membership, not a list of elements, defines a set:
{a,b,c}=={a,b,c,b}=={c,b,b,a}
In a computing context, there were some protocols that may have been IBM specific that required duplicate members to be adjacent.
So while the first and the third sets would be equivalent, the second wouldn't be, so order mattered.
Most actual implementations just dropped the redundant elements rather than tracking membership, but I was just trying to provide a concrete example.
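In Python terms the distinction looks roughly like this (a sketch of set vs sequence equality, not the IBM protocol itself):

```python
# Sets compare by membership alone; sequences compare element-by-element.
s1 = {"a", "b", "c"}
s2 = {"a", "b", "c", "b"}      # the duplicate "b" collapses on construction
s3 = {"c", "b", "b", "a"}
assert s1 == s2 == s3          # all the same set

# A protocol that keeps duplicates and compares positionally is really
# comparing sequences, where order and multiplicity both matter:
t1 = ("a", "b", "c")
t2 = ("a", "b", "c", "b")
t3 = ("c", "b", "b", "a")
assert t1 != t2 and t1 != t3 and t2 != t3
```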
IIRC the axiom schema of specification is one of those that was folded into others in modern ZFC textbooks so it is easy to miss.
I'm not sure if I completely understand your point. Is it that the definitions of ordered pairs must be done carefully when talking about constructions in Principia because of its formulation in logical predicates, e.g. care was taken when constructing sets to avoid Russell's paradox explicitly given the axioms of logic rather than Russell's paradox being excluded in ZF by the axiom schema of specification?
Or is the difficulty in introducing a canonical order for the ordered pair, or introducing well/partial-ordering in sets themselves? I guess I see an ordered pair as more of an indexical definition than an ordering definition.
> What does it mean to arrange the elements of a set A in some order?
Also note how the earlier section on "Unordered Pairs" is more about building the axiom of pairing etc...to get to ordered pairs which gets to the Cartesian product, which outputs ordered pairs.
It doesn't matter whether you go through Zermelo's theorem + Zorn's lemma, which states that every set can be well-ordered, or through Cartesian products and/or AC. (Note: in FOL, well-ordering and AC are equivalent, but not in SOL and HOL.)
It is not that sets are expressly unordered, as a set of points in a line segment would very much have an order, but that you didn't actively arrange the elements in order to take advantage of properties that are useful to you.
Maybe I just hit mental blocks, but IMHO when you make the assumption "there exists a set," it is very important to realize that the set is "unordered" because you haven't imposed an order, not because unorderedness is an innate property of its elements.
Hopefully that helps in addressing this from your original post.
> "ordered pair" is not part of set theory
While many presentations of both naive and formal set theory may choose not to define (a,b) = {{a},{a,b}} explicitly, the output of the Cartesian product is ordered pairs, so it doesn't matter: you don't have a useful set theory without them.
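For the curious, the Kuratowski encoding (a,b) = {{a},{a,b}} can be played with directly; `pair`, `first`, and `second` are hypothetical names for this sketch:

```python
# Kuratowski ordered pair as a pure set: (a, b) = {{a}, {a, b}}
def pair(a, b):
    return frozenset({frozenset({a}), frozenset({a, b})})

def first(p):
    # the first coordinate is the element common to every member of p
    (common,) = frozenset.intersection(*p)
    return common

def second(p):
    members = frozenset.union(*p)
    if len(members) == 1:              # the degenerate pair (a, a)
        return first(p)
    (other,) = members - {first(p)}
    return other

# Order is recoverable even though sets themselves are unordered:
assert pair(1, 2) != pair(2, 1)
assert first(pair(1, 2)) == 1 and second(pair(1, 2)) == 2
```

The point is that nothing beyond set membership is needed: order falls out of the asymmetry between {a} and {a, b}.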
When we wrote simple mathematics on the Pioneer and Voyager probes I think it was under the assumption that anyone or anything else finding them would have co-discovered enough mathematics to recognize it on the plaques. That's the sense in which I use the word "discovered" for much of mathematics. Our definitions will differ from aliens but the foundations will be translatable.
On a side note, and since you mentioned Roman Numerals in your other comment, I would say that the representation of I, II and III being different from IV is related to how the human brain processes quantities up to 4 [0].
So it is a simple example showing that the way humans process language influences the representation/definition of mathematical ideas.
That was a lovely read, thank you. I particularly enjoyed the analogy between 'a poorly-written computer program' (i.e. one with a lot of duplication due to inadequate abstraction), and the importance of using the appropriate mathematical machinery to reduce the complexity/length of a proof. It brings the Curry–Howard isomorphism to mind: https://en.wikipedia.org/wiki/Curry%E2%80%93Howard_correspon...
It's easier if you start from something closer to Peano arithmetic or Boyer-Moore theory. I used to do a lot with constructive Boyer-Moore theory and their theorem prover. It starts with
(ZERO)
and numbers are
(ADD1 (ZERO))
(ADD1 (ADD1 (ZERO)))
etc. The prover really worked that way internally, as I found out when I input a theorem with numbers such as 65536 in it. I was working on proving some things about 16-bit machine arithmetic, and those big numbers pushed SRI International's DECSystem 2060 into thrashing.
Here's the prover building up basic number theory, one theorem at a time.[1]
This took about 45 minutes in 1981 and takes under a second now.
Constructive set theory without the usual set axioms is messy, though. The problem is equality. Informally, two sets are equal if they contain the same elements. But in a strict constructive representation, the representations have to be equal, and representations have order. So sets have to be stored sorted, which means much fiddly detail around maintaining a valid representation.
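That sorted-representation bookkeeping might be sketched like this (illustrative names, assuming elements admit a total order):

```python
# Constructive sets as canonical representations: keep elements sorted
# and duplicate-free, so set equality is literal representation equality.
def make_set(*elements):
    return tuple(sorted(set(elements)))

def insert(s, x):
    # every operation must re-canonicalize, which is the fiddly part
    return make_set(*s, x)

assert make_set(3, 1, 2) == make_set(1, 2, 3)        # same representation
assert insert(make_set(1, 2), 2) == make_set(1, 2)   # no duplicates creep in
```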
What we needed, but didn't have back then, was a concept of "objects". That is, two objects can be considered equal if they cannot be distinguished via their exported functions. I was groping around in that area back then, and had an ill-conceived idea of "forgetting", where, after you created an object and proved theorems about it, you "forgot" its private functions. Boyer and Moore didn't like that idea, and I didn't pursue it further.
Sure, but that's its own not-quite-markdown thing, which is extra annoying because it's just close enough that people think it is markdown and do things like writing code blocks with ```. IMHO it'd be much better to just actually do markdown, or at least a strict subset.
Yeah, but could it even be changed at this point? I'd imagine that once the ball gets rolling, changing any kind of formatting rules for a site with over a decade's worth (hundreds of thousands? tens of millions?) of posts would be pretty hard to get past committee.
I would strongly favor writing a script that went through the database and rewrites existing comments from the old to new syntax; I believe in this case that's doable. And you would want to message it ahead of time of course. But with those things done I think it'd work fine, especially because I suspect virtually anyone who's gotten used to the HN formatting codes is already familiar with real markdown so it'd be a relatively painless transition.
That's only very limited support of the most basic forms of formatting. It's the year 2024, and Hacker News can't do better? Even the blog post above, from 2006, uses a LaTeX plugin.
> The ⊢ symbol has not changed; it means that the formula to which it applies is asserted to be true. ⊃ is logical implication, and ≡ is logical equivalence.
A strange thing happened to me in mathematics. When I got to the point where these symbols started showing up (ninth grade, more or less) I did not get a thorough explanation of the symbols; they just appeared and I tried to intuit what they meant. As more symbols crept into my math, I tried to ignore them where possible. Eventually this meant that I could not continue learning math, as it became mostly all such symbols.
I got as far as a minor in math. I'm not sure how any of this happened, but I wish I had a table of these symbols in ninth grade.
The main point of the parent article is not 1+1=2, but the importance of the concept of the ordered pair in mathematics, and how the introduction and use of this concept simplified proofs that had been much too complicated before it.
While the article is nice, I believe that the tradition entrenched in mathematics of taking sets as a primitive concept and then defining ordered pairs using sets is wrong. In my opinion, the right presentation of mathematics must start with ordered pairs as the primitive concept and then derive sequences, sets and multisets from ordered pairs.
The reason why I believe this is that there are many equivalent ways of organizing mathematics, which differ in which concepts are taken as primitive and in which propositions are taken as axioms, while the other concepts are defined based on the primitives and other propositions are demonstrated as theorems, but most of these possible organizations cannot correspond to an implementation in a physical device, like a computer.
The reason is that among the various concepts that can be chosen as primitive in a mathematical theory, some are in fact more simple and some are more complex and in a physical realization the simple have a direct hardware correspondent and the complex can be easily built from the simple, while the complex cannot be implemented directly but only as structures built from simpler components. So in the hardware of a physical device there are much more severe constraints for choosing the primitive things than in a mathematical theory that only describes the abstract properties of operations like set union, without worrying how such an operation can actually be executed in real life.
The ordered pair has a direct hardware implementation and it corresponds with the CONS cell of LISP. In a mathematical theory where the ordered pair is taken as primitive and sets are among the things defined using ordered pairs, many demonstrations correspond to how various LISP functions would be implemented. Unlike ordered pairs, sets do not have any direct hardware implementation. In any physical device, including in the human mind, sets are implemented as equivalence classes of sequences, while sequences are implemented based on ordered pairs.
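A toy sketch of that program: pairs as the only primitive, with sequences and finite "sets" (as equivalence classes of sequences under mutual containment) built on top. All names here are mine, not from any particular LISP:

```python
# CONS-style pairs as the primitive; sequences and finite sets on top.
NIL = None

def cons(car, cdr):
    return (car, cdr)

def member(x, lst):
    # walk the chain of pairs looking for x
    while lst is not NIL:
        car, lst = lst
        if car == x:
            return True
    return False

def set_equal(a, b):
    # two "sets" (order- and duplicate-tolerant lists) are equal iff
    # each is contained in the other
    def subset(p, q):
        while p is not NIL:
            car, p = p
            if not member(car, q):
                return False
        return True
    return subset(a, b) and subset(b, a)

s1 = cons(1, cons(2, cons(3, NIL)))
s2 = cons(3, cons(2, cons(1, cons(2, NIL))))   # different sequence, same set
assert set_equal(s1, s2)
```

Note that membership and set equality become computations over sequences, which is exactly the "sets as equivalence classes of sequences" picture.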
The non-enumerable sets are not defined as equivalence classes of sequences and they cannot be implemented as such in a physical device but at most as something of the kind "I recognize it when I see it", e.g. by a membership predicate.
However infinite sets need extra axioms in any kind of theory and a theory of finite sets defined constructively from ordered pairs can be extended to infinite sets with appropriate additional axioms.
Which definition takes up fewer components in a digital circuit is a terrible criterion. The whole point of math is that we can reason about the most conceptually simple idea, free of engineering constraints. Sets existed before circuits! And before digital, the only “hardware representation” was an analog voltage, which cannot easily represent a pair.
Also it’s not even true. There is no hardware representation for the ordered pair containing the earth and the moon. You now need a bit encoding of the information.
The distinctions of infinite constructions you mention are already well understood. See “recursively enumerable set”.
Ordered pairs are trivially definable in terms of sets. It’s a distinction which does not change any of the foundational proofs and gives you no new insight. This is like arguing that bounded vs counted ranges are foundationally important. We can show they are equivalent in one paragraph and move on.
Wait, am I crazy for thinking relations are not sets? Two sets can be coextensive without the relations having the same intension, no? Like the set of all Kings of Mars and the set of Queens of Jupiter are coextensive, but the relations are different because they have different truth conditions. Or am I misunderstanding?
> Wait, am I crazy for thinking relations are not sets? Two sets can be coextensive without the relations having the same intension, no? Like the set of all Kings of Mars and the set of Queens of Jupiter are coextensive, but the relations are different because they have different truth conditions. Or am I misunderstanding?
No-one can stop you from using terms as you please and investigating their consequences, but, at least in modern mathematical parlance, a binary relation is the set of ordered pairs that are "related" by it. (Your relation would seem to be just a bare set, or perhaps a unary relation, not a binary relation which I think is what is often meant without default modifier.)
He is talking about the difference between intension and extension. The properties "creature with a heart" and "creature with kidneys" are different, even though they may have the same extension (if the set of creatures with a heart and the set of creatures with a kidney happen to be the same). This also applies to relations of arbitrary arity. In mathematics everything is usually treated as extensional, because all the mathematical objects, like numbers, exist "necessarily". This is not the case for other objects, where things could be the same (like the set of creatures with heart and the set of creatures with kidneys) but they aren't necessarily the same. It's possible that there is a creature with heart but without kidneys. Though even in mathematics, properties that define the same objects are often not trivially equivalent: they are necessarily equivalent, but it may take a complex proof to show that they are.
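The heart/kidneys example can be made concrete; a toy sketch with an invented two-creature "world":

```python
# A toy "world": every creature in it happens to have both organs.
creatures = [
    {"name": "cat", "has_heart": True, "has_kidneys": True},
    {"name": "dog", "has_heart": True, "has_kidneys": True},
]

# Two different intensions (definitions)...
has_heart = lambda c: c["has_heart"]
has_kidneys = lambda c: c["has_kidneys"]

# ...with the same extension in this world:
ext_heart = {c["name"] for c in creatures if has_heart(c)}
ext_kidneys = {c["name"] for c in creatures if has_kidneys(c)}
assert ext_heart == ext_kidneys

# But in a possible world with a heart-only creature they come apart:
other_world = creatures + [
    {"name": "ooze", "has_heart": True, "has_kidneys": False},
]
assert ({c["name"] for c in other_world if has_heart(c)}
        != {c["name"] for c in other_world if has_kidneys(c)})
```

The predicates (intensions) stay fixed; only the world changes, and the extensions diverge — which is exactly why extensional equality is contingent here but necessary for mathematical objects.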
I often use the analogy "1+1=?" in debates with both friends and strangers, especially when discussing subjective topics like politics, religion, and geopolitical conflicts. It's a simple way to highlight how different perspectives can lead to vastly different conclusions.
For instance, I frequently use the example "1+1=10" in binary to illustrate that, while our reasoning may seem fundamentally different, it's simply because we're starting from different premises, using distinct methods, and approaching the same problem from unique angles.
None of these are "vastly different conclusions". None of these are starting from different premises. None of these are using different reasoning. You're literally just writing it differently. Okay, so? This is a pointless distinction that doesn't even apply in a verbal debate at all. It'd be like having a philosophical debate with someone and them suddenly saying "oh yeah, but what if we were arguing in Spanish!? Wouldn't that BLOW YOUR MIND!?" No? It has absolutely nothing to do with anything. I would be annoyed at you if you tried to use this in an argument with me.
> It's a simple way to highlight how different perspectives can lead to vastly different conclusions.
But 1+1=10 and 1+1=2 are not different conclusions, they are precisely the same conclusions but with different representations.
A better example might be a 9 vs a 6 painted on a parking-garage floor: depending on where you're standing, you'll read the number differently (and yet one of the readings is wrong).
Actually, it is a metaphor for formulating a brand new branch of mathematics that fixes the identity principle and all the problems with the square root of two. But also, it is not a metaphor because show me any physical system where an action times an action does not equal a reaction.
It's actually super easy to form a "brand new branch of mathematics". Just start with some definitions and run with them. Although you'll almost certainly end up with something inconsistent. And if you don't, it'll almost certainly be not useful. And if it is useful, it'll almost certainly turn out to be the exact same math just wearing a costume.
There are no problems with the square root of two.
> show me any physical system where an action times an action does not equal a reaction.
Show me any gazzbok where a thrushbloom minus a grimblegork does not equal a fistelblush. Haha, you can't do it, can you!? I WIN!
That is to say: you're using silly made up definitions of "action" and "times" here.
> show me any physical system where an action times an action does not equal a reaction
Not quite sure what an action times an action is, but how about rotating a 2d shape 180 degrees? Do that twice and it's the same as not rotating it at all.
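For what it's worth, the two-half-turns observation is easy to check numerically (a quick sketch, with `rotate` as my own helper):

```python
import math

def rotate(point, degrees):
    # rotate a 2D point about the origin
    x, y = point
    t = math.radians(degrees)
    return (x * math.cos(t) - y * math.sin(t),
            x * math.sin(t) + y * math.cos(t))

p = (3.0, 4.0)
once = rotate(p, 180)        # roughly (-3, -4)
twice = rotate(once, 180)    # back where we started, up to float error
assert math.isclose(twice[0], p[0]) and math.isclose(twice[1], p[1])
```

Two 180° "actions" compose to the identity, not to some larger "reaction" — a simple physical-ish system where composing actions doesn't accumulate.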
I know of 7 different ways to do 1+1 getting 5 different answers. I use most of them in my day to day work as a programmer. Most of the time 1+1=10 because as a programmer I work in binary.
Embedded work - not very low level, but I need to decode a lot of CAN network packets where the individual bits matter. Most of the time I use a hex representation, but that is because hex makes it really easy to figure out the binary going on underneath. Even when I'm doing normal math, though, it is important to remember that it is binary under it all, and so overflow happens at numbers that make sense in binary terms.
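A few of the interpretations one might mean by 1+1 as a programmer (a sketch; the seven alluded to above may well differ):

```python
# 1+1 doesn't have one answer until you fix the interpretation:
results = {
    "integer": 1 + 1,                    # 2
    "binary text": format(1 + 1, "b"),   # "10": same value, base-2 spelling
    "string concat": "1" + "1",          # "11"
    "saturating": min(1 + 1, 1),         # 1, e.g. a boolean OR
    "wrapping 8-bit": (255 + 1) % 256,   # 0: overflow at a binary boundary
}
assert results["integer"] == 2
assert results["binary text"] == "10"
assert results["string concat"] == "11"
```

The last entry is the overflow point from above: fixed-width arithmetic wraps at powers of two, which only looks "odd" in decimal.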
“1 + 1 = 2” is only true in our imagination, according to logical deterministic rules we’ve created. But reality is, at its most fundamental level, probabilistic rather than deterministic.
Luckily, our imaginary reality of precision is close enough to the true reality of probability that it enables us to build things like computer chips (i.e., all of modern civilization). And yet, the nature of physics requires error correction for those chips. This problem becomes more obvious when working at the quantum scale, where quantum error correction remains basically unsolved.
I’m just reframing the problem of finding a grand unified theory of physics that encompasses a seemingly deterministic macro with a seemingly probabilistic micro. I say seemingly, because it seems that macro-mysteries like dark matter will have a more elegant and predictive solution once we understand how micro-probabilities create macro-effects. I suspect that the answer will be that one plus one is usually equal to two, but that under odd circumstances, it is not. That’s the kind of math that will unlock new frontiers for hacking the nature of our reality.
> theorems like ∗22.92: α⊂β→α∪(β−α)
Either I misunderstand the notation or there seems to be something missing there - the right hand side of that implication arrow is not a formula.
I would assume that what is meant is α⊂β→α∪(β−α)=β
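For what it's worth, the repaired identity checks out mechanically over a small universe (a sanity script, not a proof):

```python
from itertools import combinations

# Spot-check the repaired *22.92, a ⊆ b → a ∪ (b − a) = b, over every
# pair of subsets of a small universe.
universe = [1, 2, 3, 4]
subsets = [set(c) for r in range(len(universe) + 1)
           for c in combinations(universe, r)]

for a in subsets:
    for b in subsets:
        if a <= b:                   # a is a subset of b
            assert a | (b - a) == b  # the identity, with "= b" restored
```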
As the Principia is pretty ugly and tortured, at least for me, let me offer:
Naive set theory. Halmos, Paul R. http://people.whitman.edu/~guichard/260/halmos__naive_set_th...
Note the first entry of "Ordered Pairs"
Mathematics is entirely founded on human invention.
[0] https://www.nature.com/articles/s41562-023-01709-3
A sentient entity could well decide to simulate the universe without developing tools to approximate it.
[1] https://github.com/John-Nagle/pasv/blob/master/src/work/temp...
The Computational Beauty of Nature has a tiny Lisp implementing integers and arithmetic by hand too, by consing t's.
In the same spirit, why 2 + 2 = 4.
https://us.metamath.org/mpeuni/mmset.html#trivia
https://us.metamath.org/mpeuni/2p2e4.html
OT (amusing parody): https://youtu.be/Zh3Yz3PiXZw [00:09:06]
Oh, so the λ in lambda calculus was just a poor man's circumflex.
Unrelated, but why doesn't Hacker News have support for latex? And markdown, for that matter?
It supports https://news.ycombinator.com/formatdoc
One of the best things about Markdown is that it is also a great plain text format for when rendering is not available.
But I do agree that HN’s format should be a strict subset, it is so close.
Simple solution: apply the new formatting code only to new comments, that is, comments written after the date the new formatting was supported.
Finally I get why they need a thousand pages to prove 1+1=2!
The issue is that 1+1 carries no guarantee it will be two. Look carefully and you can see the first 1 is exactly the same as the second 1!!!!
Hence take the set of all Russells who do that kind of maths and add another Russell who also does that maths. You still end up with one Russell.
That is why they go to all the trouble of saying there is no intersection, and the first oneness set does not overlap with the second oneness set, etc. etc.
QED
> The ⊢ symbol has not changed; it means that the formula to which it applies is asserted to be true. ⊃ is logical implication, and ≡ is logical equivalence.
A strange thing happened to me in mathematics. When I got to the point where these symbols started showing up (ninth grade, more or less) I did not get a thorough explanation of the symbols; they just appeared and I tried to intuit what they meant. As more symbols crept into my math, I tried to ignore them where possible. Eventually this meant that I could not continue learning math, as it became mostly all such symbols.
I got as far as a minor in math. I'm not sure how any of this happened, but I wish I had had a table of these symbols in ninth grade.
The main point of the parent article is not 1+1=2 itself, but the importance of the concept of the ordered pair in mathematics, and how the introduction and use of this concept simplified proofs that were much too complicated before it.
While the article is nice, I believe that the tradition entrenched in mathematics of taking sets as a primitive concept and then defining ordered pairs using sets is wrong. In my opinion, the right presentation of mathematics must start with ordered pairs as the primitive concept and then derive sequences, sets and multisets from ordered pairs.
The reason I believe this is that there are many equivalent ways of organizing mathematics, which differ in which concepts are taken as primitive and which propositions are taken as axioms (the other concepts being defined from the primitives, and the other propositions demonstrated as theorems), but most of these possible organizations cannot correspond to an implementation in a physical device like a computer.
The reason is that, among the various concepts that can be chosen as primitive in a mathematical theory, some are in fact simpler and some are more complex. In a physical realization, the simple ones have a direct hardware correspondent and the complex ones can be easily built from the simple, while the complex ones cannot be implemented directly but only as structures built from simpler components. So in the hardware of a physical device there are far more severe constraints on choosing the primitives than in a mathematical theory that only describes the abstract properties of operations like set union, without worrying about how such an operation can actually be executed in real life.
The ordered pair has a direct hardware implementation: it corresponds to the CONS cell of LISP. In a mathematical theory where the ordered pair is taken as primitive and sets are among the things defined using ordered pairs, many proofs correspond to how various LISP functions would be implemented. Unlike ordered pairs, sets have no direct hardware implementation. In any physical device, including the human mind, sets are implemented as equivalence classes of sequences, while sequences are implemented from ordered pairs.
The non-enumerable sets are not defined as equivalence classes of sequences and they cannot be implemented as such in a physical device but at most as something of the kind "I recognize it when I see it", e.g. by a membership predicate.
However infinite sets need extra axioms in any kind of theory and a theory of finite sets defined constructively from ordered pairs can be extended to infinite sets with appropriate additional axioms.
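The cons-as-primitive idea above can be sketched in a few lines (Python standing in for LISP here; a toy sketch of sets as equivalence classes of sequences, as the comment describes, not a serious foundation):

```python
# Toy sketch: ordered pairs as the only primitive, sequences as chains
# of pairs, and "sets" as sequences compared up to order and repetition.

def cons(a, b):            # the primitive ordered pair (a LISP CONS cell)
    return (a, b)

def car(p): return p[0]    # first element of a pair
def cdr(p): return p[1]    # second element of a pair

NIL = None                 # the empty sequence

def to_list(seq):          # walk a cons chain into a Python list
    out = []
    while seq is not NIL:
        out.append(car(seq))
        seq = cdr(seq)
    return out

def set_equal(s1, s2):
    # A "set" is an equivalence class of sequences: two sequences denote
    # the same set iff each contains the other's elements.
    a, b = to_list(s1), to_list(s2)
    return all(x in b for x in a) and all(x in a for x in b)

s1 = cons(1, cons(2, cons(2, NIL)))   # the sequence (1 2 2)
s2 = cons(2, cons(1, NIL))            # the sequence (2 1)
print(set_equal(s1, s2))              # True: both denote the set {1, 2}
```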
Which definition takes fewer components in a digital circuit is a terrible criterion. The whole point of math is that we can reason about the most conceptually simple idea, free of engineering constraints. Sets existed before circuits! And before digital, the only "hardware representation" was an analog voltage, which cannot easily represent a pair.
Also it’s not even true. There is no hardware representation for the ordered pair containing the earth and the moon. You now need a bit encoding of the information.
The distinctions of infinite constructions you mention are already well understood. See “recursively enumerable set”.
Ordered pairs are trivially definable in terms of sets. It’s a distinction which does not change any of the foundational proofs and gives you no new insight. This is like arguing that bounded vs counted ranges are foundationally important. We can show they are equivalent in one paragraph and move on.
Actually new ideas will give new results.
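For reference, the one-paragraph reduction alluded to above is Kuratowski's standard construction (general mathematical background, not specific to this thread):

```latex
% Kuratowski's definition of the ordered pair in pure set theory:
(a, b) \;:=\; \bigl\{\, \{a\},\ \{a, b\} \,\bigr\}
% with the characteristic property
(a, b) = (c, d) \iff a = c \ \text{and}\ b = d,
% and a binary relation R between S and T is then simply a subset
R \subseteq S \times T = \{\, (s, t) : s \in S,\ t \in T \,\}.
```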
Wait, am I crazy for thinking relations are not sets? Two sets can be coextensive without the relations having the same intension, no? Like the set of all Kings of Mars and the set of all Queens of Jupiter are coextensive, but the relations are different because they have different truth conditions. Or am I misunderstanding?
> Wait, am I crazy for thinking relations are not sets? Two sets can be coextensive without the relations having the same intension, no? Like the set of all Kings of Mars and the set of all Queens of Jupiter are coextensive, but the relations are different because they have different truth conditions. Or am I misunderstanding?
No-one can stop you from using terms as you please and investigating their consequences, but, at least in modern mathematical parlance, a binary relation is the set of ordered pairs that are "related" by it. (Your relation would seem to be just a bare set, or perhaps a unary relation, not a binary relation, which I think is what is usually meant absent a modifier.)
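To make the modern convention concrete (a small Python sketch, not from the thread):

```python
# The modern convention: a binary relation on a set S is literally a
# subset of S x S, i.e. a set of ordered pairs.
S = {1, 2, 3}
less_than = {(a, b) for a in S for b in S if a < b}

print((1, 2) in less_than)   # True: 1 < 2
print((2, 1) in less_than)   # False

# Extensionality: a relation described differently but containing the
# same pairs *is* the same relation under this definition.
also_less = {(1, 2), (1, 3), (2, 3)}
print(less_than == also_less)  # True
```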
He is talking about the difference between intension and extension. The properties "creature with a heart" and "creature with kidneys" are different, even though they may have the same extension (if the set of creatures with a heart and the set of creatures with a kidney happen to be the same). This also applies to relations of arbitrary arity. In mathematics everything is usually treated as extensional, because all the mathematical objects, like numbers, exist "necessarily". This is not the case for other objects, where things could be the same (like the set of creatures with heart and the set of creatures with kidneys) but they aren't necessarily the same. It's possible that there is a creature with heart but without kidneys. Though even in mathematics, properties that define the same objects are often not trivially equivalent: they are necessarily equivalent, but it may take a complex proof to show that they are.
Btw, there is a follow-up. Not much, but still an update: https://blog.plover.com/math/PM-translation.html
So has anyone made a modern revision of Principia using the simplifications made possible by more recent developments?
I often use the analogy "1+1=?" in debates with both friends and strangers, especially when discussing subjective topics like politics, religion, and geopolitical conflicts. It's a simple way to highlight how different perspectives can lead to vastly different conclusions.
For instance, I frequently use the example "1+1=10" in binary to illustrate that, while our reasoning may seem fundamentally different, it's simply because we're starting from different premises, using distinct methods, and approaching the same problem from unique angles.
1 + 1 = Two.
One plus one equals two.
One + 0x01 ≡ 2.0
1+1=10 (in binary)
None of these are "vastly different conclusions". None of these are starting from different premises. None of these are using different reasoning. You're literally just writing it differently. Okay, so? This is a pointless distinction that doesn't even apply in a verbal debate at all. It'd be like having a philosophical debate with someone and them suddenly saying "oh yeah, but what if we were arguing in Spanish!? Wouldn't that BLOW YOUR MIND!?" No? It has absolutely nothing to do with anything. I would be annoyed at you if you tried to use this in an argument with me.
> It's a simple way to highlight how different perspectives can lead to vastly different conclusions.
But 1+1=10 and 1+1=2 are not different conclusions, they are precisely the same conclusions but with different representations.
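That sameness is easy to check mechanically (a trivial Python check, just to make the point concrete):

```python
# "10" in base 2 and "2" in base 10 are two spellings of one number,
# so the "conclusions" coincide once parsed.
assert int("10", 2) == 2      # parse the binary spelling
assert 1 + 1 == 0b10 == 2     # Python's binary literal is the same value
print(bin(1 + 1))             # '0b10': the same number, rendered in base 2
```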
A better example might be 9 vs 6 written on the parking floor: depending on where you're standing, you'll read the number differently (and yet one of the readings is wrong).
> (and yet one of the readings is wrong).
It may not even be a number which is written, but the hiragana の (no).
It could be Japanese beeper slang and mean Q.
Thank you, it's an interesting read; on my own, without the explanation, this would have been over my head.
The Computational Beauty of Nature shows that with Lisp.
1+1=3 (for very large values of 1)
I would say 1 + 1 = 4 for very large values of one.
You only need mid values of 1 for 1 + 1 to equal 3
And 1x1=2 according to Terrence Howard
Actually, it is a metaphor for formulating a brand new branch of mathematics that fixes the identity principle and all the problems with the square root of two. But also, it is not a metaphor because show me any physical system where an action times an action does not equal a reaction.
It's actually super easy to form a "brand new branch of mathematics". Just start with some definitions and run with them. Although you'll almost certainly end up with something inconsistent. And if you don't, it'll almost certainly be not useful. And if it is useful, it'll almost certainly turn out to be the exact same math just wearing a costume.
There are no problems with the square root of two.
> show me any physical system where an action times an action does not equal a reaction.
Show me any gazzbok where a thrushbloom minus a grimblegork does not equal a fistelblush. Haha, you can't do it, can you!? I WIN!
That is to say: you're using silly made up definitions of "action" and "times" here.
> That is to say: you're using silly made up definitions of "action" and "times" here.
I believe they’re quoting Howard’s Rogan interview, fwiw
> show me any physical system where an action times an action does not equal a reaction
Not quite sure what an action times an action is, but how about rotating a 2d shape 180 degrees? Do that twice and it's the same as not rotating it at all.
You mean two reactions. Otherwise 1x1 would be 1
Are you saying you actually buy into the Terrence Howard school of mathematics? For serious?
I know of 7 different ways to do 1+1 getting 5 different answers. I use most of them in my day to day work as a programmer. Most of the time 1+1=10 because as a programmer I work in binary.
> Most of the time 1+1=10 because as a programmer I work in binary.
Really low level embedded work? Most programming I know about effectively works in base 10 or sometimes hex.
Embedded work - not very low level, but I need to decode a lot of CAN network packets where the individual bits matter. Most of the time I use a hex representation, but that is because hex makes it really easy to figure out the binary going on underneath. Even when I'm doing normal math, though, it is important to remember that it is binary under it all, and so overflow happens at numbers that make sense in binary terms.
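A hedged sketch of that kind of decoding: pulling a bit field out of a raw payload. The layout here (a 3-bit field at bit offset 4 in a two-byte payload) is invented for illustration, not any real CAN message definition.

```python
# Extract a bit field from a little-endian payload, LSB-first numbering.
def extract_bits(data: bytes, bit_offset: int, width: int) -> int:
    """Read `width` bits starting at `bit_offset`."""
    value = int.from_bytes(data, "little")
    return (value >> bit_offset) & ((1 << width) - 1)

payload = bytes([0xF4, 0x01])         # 0x01F4 little-endian = 500
print(extract_bits(payload, 0, 16))   # 500: the full 16-bit value
print(extract_bits(payload, 4, 3))    # 7: the invented 3-bit field

# And the "overflow at binary boundaries" point: an 8-bit counter
# wraps at 256, not at a decimal-friendly number like 100.
print((0xFF + 1) & 0xFF)              # 0
```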
Literally confusing syntax for semantics.
For extreme values 1+1 can be as high as 5.
It's between 0 and 10, and can be approximated by either depending on the context...
1+1=10 if math were invented before fingers.
Also:
١ + ٥ = ٦
一 + 一 = 二.
“1 + 1 = 2” is only true in our imagination, according to logical deterministic rules we’ve created. But reality is, at its most fundamental level, probabilistic rather than deterministic.
Luckily, our imaginary reality of precision is close enough to the true reality of probability that it enables us to build things like computer chips (i.e., all of modern civilization). And yet, the nature of physics requires error correction for those chips. This problem becomes more obvious when working at the quantum scale, where quantum error correction remains basically unsolved.
I’m just reframing the problem of finding a grand unified theory of physics that encompasses a seemingly deterministic macro with a seemingly probabilistic micro. I say seemingly, because it seems that macro-mysteries like dark matter will have a more elegant and predictive solution once we understand how micro-probabilities create macro-effects. I suspect that the answer will be that one plus one is usually equal to two, but that under odd circumstances, it is not. That’s the kind of math that will unlock new frontiers for hacking the nature of our reality.