This is such a great page. I love it and came across it multiple times while studying the matter.
I still would vote to learn it from Milewski though. Learning this is a journey and the author of ct-illustrated is, I think, still in the middle of it.
Milewski has made that trip multiple times already. The book and his blog are great places to learn from.
Book: https://github.com/hmemcpy/milewski-ctfp-pdf
Blog: https://bartoszmilewski.com/2014/10/28/category-theory-for-p...
> I still would vote to learn it from Milewski though
I've read the first dozen chapters of Milewski so far, and while I really enjoyed the first couple of chapters, his style of not giving precise definitions or statements, nor using precise notation, becomes really annoying after a while and makes the book practically useless as a reference. He seems to think everything is easier to understand when it's written in lighthearted, imprecise prose.
No, not all.¹
¹) https://news.ycombinator.com/item?id=41756286
You might want to look at https://www.artima.com/shop/modern_mathematics
Thanks!
Yep, this is precisely the problem I had with Milewski. It would have been nice if he included examples and definitions in a standard format.
I don't understand what Bartosz Milewski is talking about, so his book seems useless to me. But at work I use category theory for all of my domain modeling.
Oh, I can solve this: you will understand his work better once you step out of the narrow category of your work domain model.
He uses math terms to explain the math. There is no way I will use that method to understand the subject.
You might be looking for a kind of understanding that doesn't exist in math. John von Neumann said, "In mathematics, you don't understand things. You just get used to them." So even one of the greatest mathematicians of all time didn't feel like he "understood" concepts that he was adept at applying to both abstract and real-world problems.
This is why there are monad tutorials that use dozens of different analogies for monads, and nobody feels like they understand anything better after reading them. But if you use monads enough, eventually you get used to how they work and how to accomplish things with them. And then you stop caring if you "understand" them.
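To make that concrete: here is a minimal Haskell sketch (my own toy example; names like lookupHostPort are made up) of "just using" the Maybe monad to chain lookups that can fail:

    import qualified Data.Map as Map

    type Env = Map.Map String String

    -- Each lookup may fail; do-notation threads the Maybe values for us,
    -- so the first missing key short-circuits the whole computation.
    lookupHostPort :: Env -> Maybe (String, String)
    lookupHostPort env = do
      host <- Map.lookup "host" env
      port <- Map.lookup "port" env
      return (host, port)

    main :: IO ()
    main = print (lookupHostPort (Map.fromList [("host", "localhost"), ("port", "8080")]))

After you've written enough code like this, the plumbing just becomes familiar, analogies or not.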
If it is intuitive understanding you're talking about, then I feel that some things in math are more intuitive and other things less intuitive and harder to grasp.
Speak for yourself; monads are easily understood from the philosophical POV: https://en.wikipedia.org/wiki/Monad_(philosophy) which they are derived from in category theory, so maybe go that way if you still need a foundation for the concept of the concept that bore all concepts?
Category theory does not derive the idea of a monad from philosophy; it merely reuses the word for an entirely unrelated purpose. It's like how a frog [1] to a farrier has nothing to do with the frog [2] from everyday use, nor with the frog [3] from railroading.
[1] https://en.wikipedia.org/wiki/Frog_(horse_anatomy)
[2] https://en.wikipedia.org/wiki/Frog
[3] https://en.wikipedia.org/wiki/Frog_(disambiguation)#Railroad...
Easily understood? Looking at that link, the term has been used by many different philosophers with many different meanings. Leibniz's monads are the most famous, and they have nothing in common with category theory monads except the name.[0]
The mathematical concept was in use for years before anybody decided to call it a "monad" anyway. According to Wikipedia: "The notion of monad was invented by Roger Godement in 1958 under the name 'standard construction'. Monad has been called 'dual standard construction', 'triple', 'monoid' and 'triad'. The term 'monad' is used at latest 1967, by Jean Bénabou."[1]
[0] https://en.wikipedia.org/wiki/Monadology#Summary
[1] https://en.wikipedia.org/wiki/Monad_(category_theory)#Termin...
So what? If you call it god around 0 BC and monad in the 1970s, it's still the same concept. If the notation and meanings change in your POV and that gives you headaches, that's maybe related to the fact that you cannot call an absurd function, but you can derive structure from combining it.
In other words, God is just a monoid in the category of endofunctors‽
Is this taken word for word from Wikipedia?
A monad is an entity that has no purpose other than relation to itself, but it can be part of a structure that employs functionality or meaning. I think the problem around this concept for most people is that causality is not a requirement yet, so things don't have to make sense to be part of constructing upon them.
An analogy in IT might be that worrying about traversal of a graph is a problem domain in time and space and causality, involving questions around loop detection and recursion, because you approach it from a POV that has those constraints. But that's not a problem yet in the domain of structure as such.
This took some brain, so I hope it's a decent attempt at an explanation.
>> In other words, God is just a monoid in the category of endofunctors‽
> Is this taken word by word from wikipedia?
You're either very naive or explaining yourself very poorly.
Both likely - care to explain further?
> Leibniz's monads [...] have nothing in common with category theory monads except the name.[0]
The monad is something about which you can reason but you cannot look inside because it is windowless. No quibbling, please.
How does it make it easy?
That's like saying that to understand object-oriented programming one just needs to study Plato's theory of forms.
It's the point where you start from in CT. Having the problem of not being able to talk about a concept is the absurd function and part of Chapter 1 (!!) in books about category theory. So instead of circling around this topic, welcome, this means you are inside the problem domain of category theory ;)
Not such a bad idea TBH
As far as I can tell this is totally unrelated to the concept of monad in mathematics or programming.
Yeah, but that's kinda the point I made in my original post: you can work towards the general math terms while translating your domain model into higher abstractions, but this will have consequences for your personal life. You can see this clearly in the ct-illustrated author, who interweaves his struggle while learning the material and wrote a journal about both his life and the topic.
As Milewski teaches the subject, he is in a position to fit personal life, work and the topic into one book, so this is the goal, but it will entail you leaving the narrow perspective of your work. He can talk from math to math, but you have to - similar to the author of ct-illustrated - find your own way of translating your specific model into a more general one.
If you want a handle to start: think of a line that extends into + and - infinity: it will create a circle. Even though a circle is a 2-dimensional object, it's made of infinitely many distinct 1-dimensional lines. The same is true for your work: even though it is made out of a distinct, countable number of interwoven objects in your domain, this structure is part of a bigger abstraction. Doing more work, like drawing more distinct new objects, will not take you to the more general abstraction, but thinking about what all your domain models have in common will.
Try to work less, be lazy, and the underlying mathematical models will appear in a language you already speak. In your case: abstracting your work domain model into multiple clients (1 employer --> multiple employers in a similar line of work) will a) lead to your termination and b) show how you can abstract your work away from 1 source of income and purpose into multiple ones.
Good luck!
A circle is a 1-dimensional manifold (which, indeed, could be embedded in a space that has more dimensions). This follows exactly from your own (correct) image of it as being created from (i.e. being locally homeomorphic to) straight lines. But the entire thing ("the territory") always turns out to be more than the set of parts it is "built from," and if you ignore this, not only are you bound to lose the forest for the trees, your understanding of the whole may end up being completely wrong.
Would you like to elaborate more on the territory part? As far as I understood, you're talking about emergence of properties beyond the sum of the elements, but I might be mistaken. I'm fine with links to libgen.is or lengthy PDFs if necessary.
I'm currently working on trees for a nested controller approach and constructing / deconstructing differing branches. So even though I don't feel stuck right now, I'm always willing to take hints about taking another path.
Sorry, the word "territory" was merely an (unsuccessful) attempt at a pun on the notion of "atlas" used in the theory of manifolds. (In your example, the straight line segments are the "pages" in such an atlas, and the circle - the manifold - is the "territory.")
It would seem to me that the properties of the circle are not emergent properties of a bunch of line segments; rather, the circle is something else entirely, whose shape is forced upon it by its own nature. Another example that comes to mind is the so-called "fiber bundles," whose own topology in general can be much more complex than that of a "trivial bundle" (simply the product of two spaces - the "base" and the "fiber"), although locally they all do look "trivial."
None of this, I am afraid, has much to do with what you said you are working on. In the realm of software development, the only emergent properties (or non-trivial topologies) are bugs.
Nah, that's totally fine; I for my part tried to make a joke about tree structure in a frontend/backend router. As there are multiple branches (subtrees) derived from different (singly linked list) paths, it's not a problem to go back to a node and diverge; it does not invalidate the tree. I'm sure a lot of the joke in each part is lost in niche knowledge and translation.
As for manifolds, I am drawn to stuff like the hendecachoron, and we have a new fountain in Chemnitz now called "Manifold" which is made from circles, and I love it! Also, the local architect of the town hall has the surname "Möbius".
So I'll take the hint to learn a bit more about topology!
Would you be willing to share more about your domain modeling and how category theory helped?
He can abstract more models into general frameworks (this is true for every job). He is just hesitant to take the leap to go all Camus and derive money from absurdism ;)
Discussed at the time, though with another URL:
https://news.ycombinator.com/item?id=28660131 (2 comments)
https://news.ycombinator.com/item?id=28660157 (112 comments)
Starting from the beginning of the book, I came across this gem of a sentence, when speaking about math compared to science or engineering:
> Because of this, mathematicians are in a weird and, I’d say, unique position of always having to defend what they do with respect to its value for other disciplines. I again stress that this is something that would be considered absurd when it comes to any other discipline.
I think this is a concept that anyone who studied anything that does not directly lead to monetizable outcomes can relate to, but it's nice to hear that even those whose gifts skew to the numeric also have to contend with "Milton Friedman's Razor".
It's a good thing, then, that so many projects in "cultural studies" are actually funded directly by the DoD and the State Department. Hell, the entirety of "post-colonial" studies today is nothing more than the backend of US soft power (and, presumably, if a war breaks out, hard power as well).
Circles in circles doesn't really scale well if the inner circles are always vertically centered.
Can anybody share a success story of gainfully using category theory on a CS/SWE problem that couldn't have been solved without it? No, monads aren't one; you would invent them naturally when the situation calls for it. I spent a year studying it in grad school and I ultimately abandoned it.
There are no problems that can't be modeled without category theory. One of the most foundational theorems of category theory is the Yoneda Lemma, which directly states that any problem phrased in the language of categories can be translated to the language of sets and functions. The same is true of every mathematical object with a definition in terms of sets - you could always replace the name with its definition.
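The flavour of that translation even shows up in Haskell. Here is a rough sketch of the Yoneda lemma specialised to Haskell functors (the names toYoneda/fromYoneda are mine, not from any particular library):

    {-# LANGUAGE RankNTypes #-}

    -- Yoneda, specialised to Haskell: a value of type f a carries exactly the
    -- same information as a function telling you how to map out of a into f r.
    newtype Yoneda f a = Yoneda { runYoneda :: forall r. (a -> r) -> f r }

    toYoneda :: Functor f => f a -> Yoneda f a
    toYoneda fa = Yoneda (\k -> fmap k fa)

    fromYoneda :: Yoneda f a -> f a
    fromYoneda y = runYoneda y id

    main :: IO ()
    main = print (fromYoneda (toYoneda [1, 2, 3 :: Int]))  -- prints [1,2,3]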
The contribution of category theoretic language to the implicit framework of a theory can't be larger than the definition of a category, which is very small. You could be asking "why use groups when sets with an associative operation exhibiting closure, an identity and an inverse are more approachable?" Abstract algebra is based on a library of definitions that refer to types of operations on sets that are simple enough to be common. A tool or a technique are not the kind of things you can find in a definition.
Rings, vector spaces and modules get a sort of instant acceptance for what they are, but categories have believers and disbelievers. I am curious about how that can happen.
It is because without vector spaces, you could not do linear algebra, which you need for everything. Without categories, you cannot do category theory, which you need for ... what exactly?
Of course you can do linear algebra without vector spaces. Leibniz didn't know about vector spaces, yet he was doing linear algebra. It just happens that the use of vector spaces massively helps thinking about linear algebra problems. CT is applied to many domains. For a concrete example look up ZX calculus, which is used to optimize quantum circuits.
> It just happens that the use of vector spaces massively helps thinking about linear algebra problems.
That is the point here.
> CT is applied to many domains.
Yes, but in which of those does it massively help? I just looked up ZX calculus, and I am sure you can formulate that better by not mentioning CT at all.
> I just looked up ZX calculus, and I am sure you can formulate that better by not mentioning CT at all.
Based on what? Sorry, but what you're saying is beyond arrogant (considering you have zero knowledge of the subject).
As usual, quick googling reveals that the ZX calculus is really just a CT repackaging of Penrose diagrams, which were discovered and used without any involvement of CT.
You may be interested to know that I made an earlier comment about the ZX Calculus in the thread.
I can accept that meaning of the word "necessary" for the purposes of discussion. Category theory is needed to compose functions.
I can compose functions just fine without category theory. I do it all day long, in fact.
The closest I know of is the work on UMAP. I interviewed Leland McInnes who explained to me in detail how category theory was a big part of helping him connect the dots, even though the final result does not strictly need it in the actual code. Given the relative improvement over the previous state of the art (t-SNE), it's the only example that really makes me reconsider my poo-pooing the way category theory is discussed in software.
https://arxiv.org/abs/1802.03426
For an alternative POV, quite critical of UMAP, see: https://doi.org/10.1371/journal.pcbi.1011288
Yes, but I think that it can stand as an answer to the OP regardless of the technique's status as useful for particular domains.
I felt UMAP and its limitations were explained in a friendly way here https://topos.site/blog/2024-04-05-understanding-umap/ - written by someone who understands the category theory but is able to explain it without using any category theory.
Fascinating.
More on the topological data analysis:
http://outlace.com/TDApart1.html
"Anybody can share a sucess story of using a car to go places I couldn't have gone on foot"?
CT is a language and a tool, meaning anything you can say in the language of CT you can say in other languages.
Like cars, if you learn how to drive it (and this one has a very steep learning curve), it gets you places faster, but you there's in principle nothing stopping you from going there walking, i.e. without specifically referring to any CT concepts.
At the Topos Institute we're working on some new software that we're hoping will be a lot more transparent to people who haven't drunk the CT Kool-Aid yet; the current pre-alpha is mainly for systems dynamics modeling but the category-theoretic basis is, to my mind, indispensable for the range of tasks we're targeting. I'd be very happy to hear anybody's thoughts! https://topos.site/blog/2024-10-02-introducing-catcolab/
Reformulating something you already understand in a more general framework can give more insight into what it really means and isolate the essence from messy details. From my very limited understanding of it, characterising an object by universal properties is an important part of category theory.
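On the universal-properties point, a tiny Haskell sketch (my own example) of the universal property of the product: any way of producing an a and a b from a c factors uniquely through pairing.

    -- Universal property of the product, in Haskell terms: for any
    -- f :: c -> a and g :: c -> b there is exactly one h :: c -> (a, b)
    -- with fst . h == f and snd . h == g, namely the pairing below.
    pairing :: (c -> a) -> (c -> b) -> c -> (a, b)
    pairing f g x = (f x, g x)

    main :: IO ()
    main = do
      let f = length  :: String -> Int
          g = reverse :: String -> String
          h = pairing f g
      print ((fst . h) "hello" == f "hello")  -- True
      print ((snd . h) "hello" == g "hello")  -- True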
Another practical utility of category theory is providing a common language for computer scientists, mathematicians and physicists to speak. You can imagine collaboration is not easy when everyone calls the same pattern by different names, with slightly incompatible definitions that require you to understand unfamiliar theories.
> is providing a common language for computer scientists, mathematicians and physicists to speak
The cat theory framework is too high level to usefully exchange ideas between these fields. The consensus in academia seems to be that it is a nice "party trick" framework that has very limited insights or expressiveness in actual physics/CS problems.
Historically, Category Theory was developed to formalize and better understand some deep methods used in mathematics. Like much of mathematics, it "automated" some types of reasoning, opening possibilities that did not (practically) exist before. There are some areas today that cannot even be properly understood without thinking in categorical terms.
I would like to second this take, as someone who found it very useful while writing up my masters (in logic). When you're working with objects that satisfy lots of universal laws (particularly algebras, logics, stuff like that), CT can be used to remove tons of "boilerplate" in your proofs for things that are basically trivial/routine but a pain to state and prove formally. As an example, imagine a construction where you extend the underlying language of some logic (like adding additional constant symbols, for instance). Strictly speaking, to "transfer" results into this new logic you need to do a bunch of tedious proofs to show that everything goes through the way you would expect. Category theory can make expressing and working with this class of thing (and many others) very expedient.
It's a bit like working in a framework that has great primitives for the stuff you do a lot. Like think dependency injection for constructing instances. Saves you tons of coding time in the long run.
Of course, this point of view is probably hard to appreciate from outside the priesthood =P.
Yes it did, except all of those examples where it convincingly helped were super-advanced, very hard to comprehend, and lacking in non-niche applications to CS or physics.
There is a category theory "school of thought" in certain subjects, but it's usually a speculative belief in the importance of category theory.
Improving communication between experts in a niche discipline is also a value-add, in my opinion (and my experience, for whatever that's worth). There's no need for it to have mass appeal to be useful.
You are very close. CT is about structure, not about which problem this structure solves. Compilers are the closest thing I can think of in this regard: they don't address one problem domain, but many. Which one you apply them to is up to you.
One tool for one job is a simple rule you can adopt as a systems architect, allowing you to build clear structure for the problem domain you come across. esbuild comes to mind as an example - the job was solved before, but keeping one purpose in mind and writing it from scratch solves the problem WAAAY faster.
So no, no problem is solved inside the domain of product software development, but outside of it, you as a developer can (if you want and for speed) derive any structure from the absurd function instead of combining foreign frameworks.
> CT is about structure
No, this is exactly what CT is not about. (It is about morphisms.)
From Milewski: "That’s because category theory — rather than dealing with particulars — deals with structure. It deals with the kind of structure that makes programs composable."
And he is right, because morphisms may or may not preserve structure. If you want to nitpick and create structure from the absurd function morphism - then yeah, I think a discussion about this gets tedious. The more you look into the matter, the more structure/data and morphisms merge, and your point feels more like an invitation for newbies to have a mental breakdown.
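To keep the "composable" part concrete, a tiny Haskell sketch (my example, nothing deep) of the category laws for ordinary function composition:

    -- Haskell functions with id and (.) form a category; these laws are
    -- exactly the guarantees that let you regroup and refactor pipelines.
    f, g, h :: Int -> Int
    f = (+ 1)
    g = (* 2)
    h = subtract 3

    main :: IO ()
    main = do
      print (((h . g) . f) 10 == (h . (g . f)) 10)        -- associativity: True
      print ((id . f) 10 == f 10 && (f . id) 10 == f 10)  -- identity: True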
Determining whether something is useful because it’s the only way that a problem can be solved is quite a high bar.
We could say the same about computers in general.
Admittedly, even with less stringent criteria I don't have any examples. So I understand your point.
Brainfuck is Turing complete, so why would we worry about any other structure-preserving compilers? Brainfuck will do just fine /s
CT is outside most problem domains in computation, as it's outside the time and space constraints of a machine. Knowing whether a program will never finish is part of CT for software developers. So handling this case is a maybe in CT, while it's a must in software (running endlessly means crashing).
I've heard of some database migration tooling that uses category theory to compute robust data transformations that are automatically composable to achieve the desired outcome.
There has been some research into the fundamentals of machine learning using category theory approaches for computing the compositions of transformations of expressions, e.g. simultaneously computing a gradient, the "bounding box" of the error, and other similar derivatives to improve the robustness of gradient descent.
> any CS/SWE problem that couldn't have been solved without?
Any computing problem that could be solved with category could be solved by brainfuck.
My cynical understanding of category theory is that it's the mathy Peter principle: category theory is when meta-mathematics starts to lose all value. Except to professional mathematicians, of course, in which case the value is almost purely economic. "He who is employed to teach something that cannot be understood will always have a job."
It is really not useful at all in software engineering except possibly in some very niche case.
Abstractions never solve problems that couldn't have been solved without them.
Sure; it's just that doing calculations with the Roman numerals takes longer.
Roman numerals are too abstract a representation of counting. I use potatoes.
~ "Numbers are the abstract notion; the primitive way of counting is a bijection." - W. Lawvere
So the way people use "abstraction" sounds more like they are saying "a thing we (we think) are not used to".
I think category theory is useful, but not yet in computing.
I suspect it's hard if you don't really need it for anything. Do you really need to understand universal properties and adjoint functors and the Yoneda lemma? If you don't, you'll struggle to learn what those are.
Interestingly, experience in functional programming can help you understand category theory, but not so much the other way round. For instance: Parametric polymorphism gives you intuition for natural transformations. And natural transformations are central to every application of category theory.
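For instance, a minimal sketch: safeHead is parametrically polymorphic, and parametricity gives you the naturality square for free.

    -- safeHead is a natural transformation from the list functor to Maybe:
    -- for every f, fmap f . safeHead == safeHead . fmap f, by parametricity.
    safeHead :: [a] -> Maybe a
    safeHead []      = Nothing
    safeHead (x : _) = Just x

    main :: IO ()
    main = do
      let f  = (* 2) :: Int -> Int
          xs = [1, 2, 3]
      print (fmap f (safeHead xs))  -- Just 2
      print (safeHead (fmap f xs))  -- Just 2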
The convincing applications of category theory are very mathematical. You'll find them in algebraic topology, representation theory, algebraic geometry, and non-classical logic.
It is useful here:
https://en.m.wikipedia.org/wiki/ZX-calculus
https://zxcalculus.com/
https://www.reddit.com/r/quantum/s/2NzsJaDYwm
Quantum computing is still a bit niche, no? And graphical notations already existed in physics and quantum computing, I believe. What does the category theory do here, except reformulate things that experts already understood, but in category theory language?
I think a convincing application of category theory should involve doing calculations with category theory concepts and definitions, which involve things like: commutative diagrams, representable functors, universal properties, adjoint functors. If a whole heap of these concepts doesn't get used - and you don't perform calculations with them - then you're just reformulating something using different terminology.
Thanks anyway for the link. Very pretty.
I wouldn't call "niche" something some big companies have been spending billions of dollars on.
In module theory, which is a bit close to ZX, category theory gets used in a big way to calculate things: See left/right exact functors, co/contravariant functors, derived functors, the hom-tensor adjunction, etc.
But!! I'm not seeing ANY of that in ZX.
Would you say that 3 reformulates 1+1+1 in another language? Because if yes, such reformulations shouldn't be disregarded just because they're "reformulations". So we can say there are kinds of reformulations which make things incredibly easier, and category theory is one of them.
Why do you think category theory makes it easier? That's one thing category theory does not do. How well do you understand the subject?
There is a category theory "school of thought" in many subjects, which believes without solid evidence that category theory must be immensely useful to their subject. But it's often just that: a school of thought. This is the situation of category theory in CS. My concern is that people aren't being honest about this.
You're right, and I'm in that school of thought, but mainly for math. In essence, I know people who use it in their research and I trust them (and there's a lot of good people doing good stuff with CT; it's ubiquitous, so I'm fine with trusting them). I have a beginner's understanding of CT; actually, I finally started studying it seriously a month ago, after being exposed to it sporadically for some time. I'm not claiming anything about CT in CS, but I'm optimistic that it can have a more widespread positive effect there.
There is an error:
> Modus ponens is a proposition that is composed of two other propositions (which here we denote A and B) and it states that if proposition A is true and also if proposition (A --> B) is true (that is if A implies B), then B is true as well. For example, if we know that “Socrates is a human” and that “humans are mortal” (or “being human implies being mortal”), we also know that “Socrates is mortal.”
This example is not an instance of modus ponens (a rule of propositional logic); it is rather a categorical syllogism, which requires predicate logic.
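To spell out the difference, a quick sketch in Lean (my own formulation and names):

    -- Modus ponens is purely propositional: no predicates or quantifiers.
    theorem mp {A B : Prop} (ha : A) (hab : A → B) : B := hab ha

    -- The Socrates example is a syllogism and needs predicate logic:
    theorem socrates_mortal {Person : Type} (Human Mortal : Person → Prop)
        (socrates : Person)
        (h1 : ∀ x, Human x → Mortal x) (h2 : Human socrates) :
        Mortal socrates := h1 socrates h2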
This says "Logic is the science of the possible", but wouldn't logic be the science of the definite? I mean, the whole point is to be able to say definitively what is or isn't valid.
Interesting diagrammatic notation. Does the author give inference rules for truth preserving transformations of diagrams?
Oh, I just realized the blog author himself created this post: Boris, we love you! Best of luck on your journey and thanks for your work!
Touching. Thanks. And feel free to create some GitHub issues or write to me if you have some feedback.