The review distills the book's view of the difference between pure and applied mathematics: "applied" split from "pure" to meet the technical needs of the US military during WWII.
My best example of the split is https://en.wikipedia.org/wiki/Symmetry_of_second_derivatives
Wikipedia notes that "The list of unsuccessful proposed proofs started with Euler's, published in 1740,[3] although already in 1721 Bernoulli had implicitly assumed the result with no formal justification." The split between pure (Euler) and applied (Bernoulli) is already there.
The result is hard to prove because it isn't actually true. A simple proof would also apply to the counterexamples, so it cannot be correct. A correct proof must use the additional hypotheses needed to block the counterexamples, so it cannot be simple.
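To make this concrete, here is a minimal sketch (the standard textbook counterexample; the code and names are mine, not something this thread supplies). The function f(x,y) = xy(x²−y²)/(x²+y²), patched with f(0,0) = 0, has f_xy(0,0) = −1 but f_yx(0,0) = 1, and a mixed difference quotient recovers either value depending on which step shrinks first:

```haskell
-- The classic counterexample to naive symmetry of second derivatives.
f :: Double -> Double -> Double
f 0 0 = 0
f x y = x * y * (x * x - y * y) / (x * x + y * y)

-- Symmetric mixed-difference quotient at the origin, with independent step
-- sizes h (x-direction) and k (y-direction). For this f it simplifies to
-- exactly (h^2 - k^2) / (h^2 + k^2), which has no limit as (h,k) -> (0,0).
mixed :: Double -> Double -> Double
mixed h k = (f h k - f h (-k) - f (-h) k + f (-h) (-k)) / (4 * h * k)

main :: IO ()
main = do
  print (mixed 1e-3 1e-6)  -- ~  1.0: k shrinks first, the f_yx(0,0) limit
  print (mixed 1e-6 1e-3)  -- ~ -1.0: h shrinks first, the f_xy(0,0) limit
```

The extra hypothesis that blocks this (continuity of the mixed partials) is exactly what a simple proof fails to use.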
Since the human life span is only about 70 years, I face an urgent dilemma. Do I master the technique needed to understand the proof (fun), or do I crack on and build things (satisfaction)? Pure mathematicians plan to construct long and intricate chains of reasoning; a small error can get amplified into an error that matters. From a contradiction one can prove anything. Applied mathematics gets applied to engineering: build a prototype and discover problems with tolerances, material impurities, and annoying edge cases in the mathematical analysis. An error will likely show up in the prototype. Pure? Applied? It is really about the ticking of the clock.
I think the problem is that theoretical real analysis is often presented as if it were nothing but a validation of things people already knew to be true -- but maybe it's not?
The example you gave concerns differentiation. Differentiation is messy in real analysis because it's messy in numerical computing. How real analysis fixes this mess parallels how numerical computing must fix the mess. How do we make differentiation - or just derivatives, perhaps - computable?
The rock-bottom condition for computability is continuity: all discontinuous functions are uncomputable. It turns out that it is sufficient, for the theorem to hold, that the second partial derivatives f_{xy} and f_{yx} be continuous. They wouldn't even be computable otherwise!
One of the proofs provided uses integration. In numerical contexts, it is integration that is considered "easy" and differentiation that is considered "hard". This is exactly backwards from symbolic calculus.
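A hedged illustration of that reversal (my example, not from the thread): perturb f(x) = x² by a tiny high-frequency wiggle, as a stand-in for measurement or rounding noise, and compare how a difference quotient and a Riemann sum respond.

```haskell
eps :: Double
eps = 1e-6

-- The "measured" function: off from x^2 by at most eps everywhere.
noisy :: Double -> Double
noisy x = x * x + eps * sin (x / eps)

-- Central difference with step h: the eps-sized error is divided by h,
-- so the wiggle's O(1) slope survives and pollutes the answer.
diffQuot :: Double -> Double
diffQuot x = (noisy (x + h) - noisy (x - h)) / (2 * h)
  where h = 1e-6

-- Riemann sum over [0,1]: each sample's error is multiplied by dx, and the
-- oscillation largely cancels, so integration smooths the noise away.
riemann :: Double
riemann = sum [noisy x * dx | x <- [0, dx .. 1]]
  where dx = 1e-3

main :: IO ()
main = do
  print (abs (diffQuot 1 - 2))   -- typically order 1: the noise is amplified
  print (abs (riemann - 1 / 3))  -- ~5e-4, set by dx; the noise is invisible
```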
The article also mentions Distribution Theory. This is important in the theory of linear PDEs, and I suspect it is implicit in the algorithmic theory as well, whether practitioners have spelled this out or not. It is a theory that makes the differentiation operator itself computable, but at the cost of making the derivatives weaker objects than ordinary functions. How so? On the one hand, it allows one to obtain things like the Dirac delta as derivatives, even though those aren't functions at all. On the other hand, these objects behave like functions - say f(x,y) - but we can't evaluate them at points; instead, we take their inner product with test functions, which we can use to approximate evaluation. This is important because PDE solvers may only be able to provide solutions in the weak, distribution-theoretic sense.
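A minimal executable model of that idea (the types and names here are my own toy rendering, not standard library code):

```haskell
-- A distribution, modelled as a (linear) functional on test functions.
type TestFn       = Double -> Double
type Distribution = TestFn -> Double

-- An ordinary function g induces a distribution via <g, phi> = integral of
-- g * phi, crudely approximated here by a Riemann sum on [-10, 10].
fromFunction :: (Double -> Double) -> Distribution
fromFunction g phi = sum [g x * phi x * dx | x <- [-10, -10 + dx .. 10]]
  where dx = 1e-3

-- The Dirac delta: <delta, phi> = phi(0). No pointwise-defined ordinary
-- function produces this pairing.
dirac :: Distribution
dirac phi = phi 0

-- Distributional derivative: <D', phi> = -<D, phi'>. Differentiation
-- becomes total - every distribution has a derivative - because the test
-- function absorbs it (phi' is approximated by a central difference here).
derivative :: Distribution -> Distribution
derivative d phi = negate (d dphi)
  where dphi x = (phi (x + h) - phi (x - h)) / (2 * h)
        h = 1e-5
```

In this toy model, `derivative (fromFunction step)` for a Heaviside step pairs with test functions approximately the way `dirac` does, which is the textbook first example of a derivative that exists only distributionally.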
Do I understand properly that in a different universe distributions could have been called prefunctions?
A distribution is a function, on the space of test functions.
OK, so if we have a distribution D (less nice than the average function) and a test function T (nicer than the average function), we have ⟨D,T⟩ = c: ℂ, so ⟨D,—⟩: test fn→ℂ and ⟨—,T⟩: distribution→ℂ ?
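Essentially yes, up to ℝ standing in for ℂ. A toy confirmation, reusing the sketch types above (my notation, not the commenter's):

```haskell
pairing :: Distribution -> TestFn -> Double   -- <D, T>
pairing d t = d t

slotD :: Distribution -> (TestFn -> Double)   -- <D, ->
slotD = pairing

slotT :: TestFn -> (Distribution -> Double)   -- <-, T>
slotT = flip pairing
```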
Wait, I thought functions are predistributions...
[My bad, it was Matvei, not Manuel; no idea how I mixed that up...
Check out his children's books, as well as
https://archive.is/eaYRs
Note how the independent diagonals are what I consider interesting]
If there are no interiors (maybe edges, but no faces or volumes), then the vertices on the diagonals are truly independent: e.g. QM on small scales, GR on large ones.
[I'm currently pondering how the "main diagonal" of a transition matrix provides objects, while all the off-diagonal elements are the arrows. This implies that by rotating into an eigenframe (diagonalising), we're reducing the diversion to -∞ (generalised eigenvectors have nothing to lose but their Jordan chains) and hence we're back in the world of classical Boolean logic?]
EDIT: https://mmozgovoy.dev/posts/solar-matter/
[Righhht, maybe you can excite me even more by relating this to quantales?? Or maybe expand on fns vs distributions a bit more?]
L: quantal (quasiparticles)
Is this a sufficient relation: relations (matrices which are particularly "irreducible"/"simple" in that they've forgotten their weights to the point where these are either identity or zero) are concrete models of abstract quantales?
Lagniappe: https://www.sciencedirect.com/science/article/pii/0022404993...
EDIT: I'm afraid I'm just learning fns vs distributions (curried fns?) myself.
I wonder how quasiparticles might relate to ideals (nuclei in quantale-speak I believe)? Note that something very much like quasiparticles is how regexen turn exponential searches into polynomial...
REDIT(s)
I ought to get overly emotional (in a bittersweet way) about all this, and I almost did, but Teddy reminded me to stay ataraxic (i.e. keeping his role in formulating key management policies purely in the cortex).
thank you for that blogpost about MPB (it's one small step for fuzzablekind!)
[as well as the nuclei hint, more tk]
thank you for EC ... as to thermidorian reactions, I haven't read tRB yet but it's on the slush pile now (and I have an ice pick —albeit full length— for set dressing while I read).
oops: Eispickel => ice axe, not ice pick
A distribution is not a function. It is a continuous linear functional on a space of functions.
Functions define distributions, but not all distributions are defined that way, like the Dirac delta or integration over a subset.
A functional is a function.
The term "function" sadly means different things in different contexts. I feel like this whole thread is evidence of a need for reform in maths education from calculus up. I wouldn't be surprised if you understood all of this, but I'm worried about students encountering this for the first time.
Don’t know if you are a mathematician or not, but mathematically speaking “function” has a definition that is valid in all mathematical contexts. A functional clearly meets the criteria to be a function, since being a function is part of the definition of being a functional.
The situation is worse than I thought. The term "function", as used in foundations of mathematics, includes functionals as a special case. By contrast, the term "function", as used in mathematical analysis, explicitly excludes functionals. The two definitions of the word "function" are both common, and directly contradict one another.
> By contrast, the term "function", as used in mathematical analysis, explicitly excludes functionals. The two definitions of the word "function" are both common, and directly contradict each other.
This is incorrect. In mathematics there is a single definition of function. There is no conflict or contradiction. In all cases a function is a subset of the Cartesian product of two spaces that satisfies a certain condition.
What changes from subject to subject is what the underlying spaces of interest are.
> What changes from subject to subject is what the underlying spaces of interest are.
I'm not sure I understand what you mean here. I need some clarification. How does this have any bearing on whether functionals count as functions or not? What are the "underlying spaces of interest" in this example?
In some trivial way, every mathematical object can be seen as a function. You can replace sets in axiomatic set theory with functions.
Everything I wrote was assuming set theory as the foundations for mathematics and applies only to that setup. At any rate, a functional is a function, since the definition starts with: a functional is a function from…
Some books will say: a functional is a linear map….
Note that a linear map is a function.
You genuinely don't know what you're talking about. The word "function" means different things in different areas. So does the word "map" or "mapping". In analysis, what you personally call a "function" instead falls under the term "mapping". In foundations - which is a different area with incompatible terminology - the terms "mapping" and "function" are defined to mean the same thing.
This situation is a consequence of how mathematicians haven't always been sure how to define certain concepts. See "generating function" for yet another usage of the word "function" that's in direct contradiction with the last two. So that's three incompatible usages of the term "function". All this terminology goes back to the 1700s when mathematics was done without the rigour it has today.
I find it aggravating how you're so confidently wrong. I hope it's not on purpose.
[edit] [edit 2: Removed insults]
I am looking at the whole development of this thread with amusement, but I also find it somewhat shocking.
I see that you are desperately trying to distinguish "foundational" and "analysis" contexts from each other. If you are writing a book about analysis, it might be helpful to clarify that in this context you reserve "function" for mappings into ℂ or ℝ; for example, [1] defines "function" exclusively as a mapping from a set S to ℝ (without any further requirements on S, such as being a subset of ℝⁿ). Note that even under this restricted definition of function, a distribution is still a function.
In a general mathematical context, "function" and "mapping" are usually used synonymously. It is just not the case that such use is restricted to "foundations" only.
It seems to me that squabbles about issues like this are becoming more frequent here on HN, and I am wondering why that is. One hypothesis I have is that there is an influx of people here who learn mathematics through the lens of programs and type theory, and that limits their exposure to "normal" mathematics.
[1] Undergraduate Analysis, Second Edition, by Serge Lang
I learned mathematics the regular way. So you're wrong - and not just about this.
> I see that you are desperately trying to distinguish "foundational" and "analysis" contexts from each other
They literally are different. The proof is all the people here saying that distributions aren't functions, while displaying a clear understanding of what a distribution is. Maybe no one's "wrong" as such, if they're defining the same word differently.
I think you're the naive one here. Terminology is used inconsistently, and I tried to simplify the dividing line between different uses of it. I agree it's inaccurate to say it's decided primarily by Foundations vs Analysis, but I'm not sure how else to slice the pie. It's like how the same word can mean slightly different things in French and English. I agree it's quibbling, but it's harder to teach maths to people if these False Friends exist but don't get pointed out.
I never expected some obsessive user to make 6 different replies to one of my comments. Wow. This whole thread was a bit silly, and someone's probably going to laugh at it. I need to take another break from this site.
> I never expected some obsessive user to make 6 different replies to one of my comments. Wow.
You have 6 posts in the thread started by my top comment. I had multiple replies to one of your posts because HN requires one to wait a while to reply and I was in a hurry. The order of posts doesn’t matter. At least not to me.
Insinuating I’m obsessive has a negative connotation. Along with outright insults such comments make you look bad and unreasonable.
Terry Tao in one of his analysis books writes:
Functions are also referred to as maps or transformations, depending on the context.
This after defining a function in essentially the same way I did.
Just to make clear, so you are saying Serge Lang is wrong, too? And as proof you cite various anonymous HN users, most of them heavily downvoted?
> I agree it's inaccurate to say it's decided primarily by Foundations vs Analysis, but I'm not sure how else to slice the pie.
Seems you agree with me after all.
> I agree it's quibbling, but it's harder to teach maths to people if these False Friends exist but don't get pointed out.
A distribution is a function, but considered on a different space.
It is even harder to teach math to people by insisting that the above fact is wrong. Schwartz got a Fields Medal for this insight.
It’s strange to hear a fellow mathematician say that if I’m in set theory class then a functional is a function but isn’t one in functional analysis. In Rudin’s Functional Analysis book he proves that linear mappings between topological vector spaces are continuous if they are continuous at 0. I’ve never heard of someone believing that a continuous mapping is not a function.
Terry Tao writes in his analysis book:
Functions are also referred to as maps or transformations, depending on the context.
Tao certainly knows more about this than I ever will.
Yeah, the whole argument felt somewhat unhinged and silly. It is fine to point out that sometimes "function" is used in a more specific manner than "mapping", particularly in analysis, but I doubt any mathematician would think that a functional is not a function, in a general context such as a HN comment.
> You genuinely don't know what you're talking about. .... I find it aggravating how you're so confidently wrong.
This is a fine example of irony.
Let V be a vector space over the reals and L a functional. Let v be a particular element of V. L(v) is a real number. It is a single value. L(v) can't be 1.2 and also 3.4. Thus L is a function.
A function is simply a subset of the product of two sets with the property that if (a,b) and (a, c) are in this subset then b=c.
Can you find a functional that does not meet this criterion? If so, then you have an object L such that L maps v to b and also maps v to c, with b and c being different elements.
Find me a linear map that does not meet the definition of function. Give an example of a functional in which the functional takes a given input to more than one element of the target set.
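For the finite case, that single-valuedness condition is directly executable (a toy sketch of my own, not anything from the thread):

```haskell
-- A relation, given as a list of pairs, is a function iff no input is
-- related to two different outputs.
isFunction :: (Eq a, Eq b) => [(a, b)] -> Bool
isFunction rel = and [b == c | (a, b) <- rel, (a', c) <- rel, a == a']

-- isFunction [(1, 2.0), (2, 4.0)]  == True   (single-valued)
-- isFunction [(1, 2.0), (1, 3.0)]  == False  (1 maps to two values)
```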
I think you are not a mathematician and you also don't appear to understand that a word can have different meanings based on context. "generating function" isn't the same thing as "function". Notice that generating is paired with function in the first phrase.
Example: Jellyfish is not a jelly and not a fish. Biologists have got it all wrong!
I'll try one last time.
> I think you are not a mathematician
Guess again.
> Example: Jellyfish is not a jelly and not a fish. Biologists have got it all wrong!
You have a problem with reading comprehension. I never said any mathematician was wrong.
Think about namespaces for a moment, like in programming. There are two namespaces here: The analysis namespace and the foundations namespace.
In either of those two namespaces, the word "mapping" means what you're describing: an arbitrary subset F of A×B for which every element a ∈ A occurs as the first component of a unique element (x,y) ∈ F.
But the term "function" has a different meaning in each of the two namespaces.
The word "function" in the analysis namespace defines it to ONLY EVER be a mapping S -> R or S -> C, where S is a subset of C^n or R^n. The word "function" is not allowed to be used - within this namespace - to denote anything else.
The word "function" in the foundations namespace defines it to be any mapping whatsoever.
Hopefully, now you'll get it.
If one has a “thing” that “maps” elements of one set to another that satisfies the condition I previously gave then that thing is a function. Every functional satisfies that definition. Therefore every functional is a function.
[edit] I've finally blown it. You're a moron. Your definition of "function" as some subset of AxB is how it's defined in foundations. It's not how it's defined in analysis. In analysis, your definition would describe the term "mapping". What a crackpot and idiot. I'm done wasting time and sanity on this.
Interesting. So you think there are functions in real analysis that are studied that don't meet the definition I gave? Is there a functional that does not meet the definition I gave?
In all contexts a function is a subset of the product of two sets that meets a certain condition. Anything that does not meet this definition is not called a function.
Every functional meets the definition of function.
The word "function" in the analysis namespace defines it to ONLY EVER be a mapping S -> R or S -> C, where S is a subset of C^n or R^n. The word "function" is not allowed to be used - within this namespace - to denote anything else.
In real analysis one is interested in functions from R^n to R. But they don't define "function" to be exclusively a map from R^n to R; it's just that these are the types of functions they care about.
No mathematician can possibly think function is anything other than a subset of the product of two spaces that meets a certain condition.
In general, instead of resorting to name calling it's best to just walk away. It makes you look bad and unreasonable.
Try composing two distributions.
Try composing f : A -> B with g : A -> B, for A ≠ B. Still, f and g are functions. So, what exactly is your point?
I’m not sure if I am mathematically sophisticated enough to follow along but I’ll try. This chain of thought reminds me of the present state of cryptography, which is built on unproven assumptions about the computational hardness of certain problems. Meanwhile Satoshi Nakamoto hacks together some of these cryptographic components into a novel system for decentralized payments with a few hand-wavy arguments about why it will work and it grows into a $1+ trillion asset class.
The innovation in Bitcoin is not the cryptography but the game theory at work. For example, is it more profitable for a miner to destroy the system or to continue mining? There are theoretical attacks at around 20%, not just 51%. A state actor could also attack the system if they were willing to invest enough resources.
Genuinely curious since I’d only heard of the “51% attack” — what happens around 20%?
Please check "Selfish Mining: A 25% Attack Against the Bitcoin Network" [1] and scientific studies such as [2].
[1] https://www.reddit.com/r/Bitcoin/comments/1pv8ty/selfish_min...
[2] https://arxiv.org/pdf/1507.06183
Yes, the cool thing about tech is that you don't have to know why it will work, or even how, just so long as it does.
I took a look at the book a while ago, and I like how it treats abstraction as its guiding theme. For my project Practal (https://practal.com), I've recently pivoted to a new tagline which now includes "Designing with Abstractions". And I think that points to how to resolve the dilemma you point out between pure and applied: we soon will not have to decide between pure and applied mathematics. Designing (≈ applied math) will be done the most efficient way by finding and using the right abstractions (≈ pure math).
The chains of reasoning are only long and intricate if you trace each result back to axiomatics. Most meaningful results are made up of a handful of higher-level building blocks -- similar to how software is crafted out of modules rather than implementing low-level functionality from scratch each time (yes, similar but also quite different).
https://www.landsburg.com/grothendieck/mclarty1.pdf
That's a fantastic essay - I feel like it's the tip of a rich vein that I'm looking forward to exploring. Thanks for drawing it to my attention. I can't wait to get on to studying abstract algebra and categories properly for myself which is probably about a year off at this point.
If you want to study categories from a relatively foundational point of view, the author, McLarty, also has a very readable book called Elementary Categories, Elementary Toposes.
Sounds perfect. Thank you very much for the tip.
Literally the same:
A type is a theorem and its implementation a proof, if you believe that Curry-Howard stuff.
We “prove” (implement) advanced “theorems” (types) using already “proven” (implemented) bodies of work rather than return to “axioms” (machine code).
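A minimal sketch of that reading (my examples; nothing deep, just the shape of the correspondence):

```haskell
-- Theorem: (A -> B) and A together yield B (modus ponens).
-- The implementation below is its proof.
modusPonens :: (a -> b) -> a -> b
modusPonens g x = g x

-- Theorem: A and B implies B and A.
andComm :: (a, b) -> (b, a)
andComm (x, y) = (y, x)

-- Proving with building blocks rather than from axioms: reuse andComm as a
-- lemma instead of re-deriving commutativity inline.
swapTwice :: (a, b) -> (a, b)
swapTwice = andComm . andComm
```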
No, it is not the same, CH is just a particular instance of it, much like "shape" is not the same thing as "triangle".
Here's Cornelius Lanczos in 1972 on how the "pure math" and "applied math" split was not a thing until the beginning of the 20th century: https://www.youtube.com/watch?v=avSHHi9QCjA
Hey guys, I’m honestly not sure how to explain this - I’m not a mathematician but a culture and media scholar to whom talking with AI comes quite naturally. I worked on this for the past 2 months, 12-14 hours a day, as it began to develop into something unique… a sketch for maths without infinity (in any sense of the term). AIs claim it’s legit. A few friends with PhDs in maths and physics claim that… it’s mind-boggling, but they can’t find serious flaws in it. It all started as a philosophical deep-dive with AI on civilization’s “programs” and somehow evolved into revisiting Pascal’s probability, but with a twist from thermodynamics. Then it spiraled into what I can only call Void Theory - a framework that feels almost surreal and dogmatically realistic in its approach to math as a system that exists in a material world. Due to its posthuman origins it would take ages to spread the traditional way, and I think it would be a waste of time. I can promise you that - at least as a kind of experiment - it’s fascinating and, maybe, can be something quite big. Be so kind and give it a chance… https://drive.google.com/drive/folders/1dBSWahEz_9kbyK-PGXxZ...
Pretty cool stuff