I spent many hours many years ago studying CT and I was left very frustrated. Endless definitions without any meaningful results or insights. My (admittedly cartoonish) takeaway from Category Theory was: "We can think of all math as consisting of things and relationships between things!"

Like, ok?

It seems to appeal to computer science folks who want to level up their math chops, enjoy an exclusive "boutique" branch, and long for grand unification schemes. But IMHO it simply doesn't deliver. It's a whole lotta talk to say nothing.

I always compare with abstract algebra, where you start with a basic definition of groups and within a few pages you reach Lagrange's Theorem and now you understand something amazing about primes and sets, and you can keep squeezing and twisting the basic definitions for ever-more complex and brain-punishing results.

I've been saying this in this thread, but Grothendieck used the language of category theory to revolutionize algebraic geometry. All modern algebraic geometry is written in the language of category theory. The best example of a specific problem solved using category theory is Grothendieck's proofs of the Weil Conjectures. In particular, the proofs use étale cohomology, whose very definition uses category theory.

https://en.wikipedia.org/wiki/Weil_conjectures https://en.wikipedia.org/wiki/%C3%89tale_cohomology

If you can get to the point of understanding this stuff, which admittedly takes a long time, I promise you that you'll appreciate category theory, lol. I didn't understand the significance of the Weil conjectures until my 4th year in a math PhD program, where I studied number theory.

The problem (at least partly) is that it's possible to read about and understand the statements in category theory long before you can understand any of its elegant and great uses, in pure mathematics at least. I'm not familiar with the uses of category theory in computer science.

Categories are very similar to groups. If you can see how groups come up often and so it can be useful to think about the concept of groups, it's exactly the same thing with categories.

A group models the invertible maps from one structure to itself. A groupoid models the invertible maps between a number of structures. A monoid models the maps from one structure to itself (without presuming invertibility). A category models the maps between a number of structures (without presuming invertibility).
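To make that taxonomy concrete, here is a small Python sketch (Python chosen purely for illustration): the endomaps of a two-element set form a monoid under composition, and the invertible ones among them form a group.

```python
from itertools import product

# Maps from the 2-element set {0, 1} to itself, encoded as tuples:
# f = (f(0), f(1)). Composition of maps is the monoid operation.
maps = list(product(range(2), repeat=2))            # all 4 endomaps: a monoid
invertible = [f for f in maps if set(f) == {0, 1}]  # the 2 bijections: a group

def compose(f, g):
    """(f . g)(x) = f(g(x)), composition of endomaps."""
    return tuple(f[g[x]] for x in range(2))

identity = (0, 1)

# Monoid laws: closure under composition, with the identity map as unit.
assert all(compose(f, g) in maps for f in maps for g in maps)
assert all(compose(identity, f) == f == compose(f, identity) for f in maps)

# Group law on the invertible maps: every bijection has an inverse.
assert all(any(compose(f, g) == identity for g in invertible) for f in invertible)
```

The non-invertible maps (the two constant maps) live in the monoid but not in the group, which is the whole distinction being drawn above.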

Again though, where are the derived results? I found very very few in CT, where other branches are replete. Back to my example, abstract algebra has one derived result after another.

CT is just... statements. It's "observational." I suppose maybe one should simply understand that going in, that there are no (or very few) actual results, it's just a set of descriptors.

There are many results, but whether you would find them interesting is up to you. (I don't even know what it would mean for a result to not be a "statement" or "observation". Every group-theoretic result is a statement of an observation, and for that matter, the Riemann Hypothesis is a statement too.)

Since you mention Lagrange's theorem, I presume you are looking for introductory results rather than advanced results. E.g., the result that a functor has at most one left adjoint up to isomorphism (a direct generalization of the fact that inverses are unique in groups). Or the result that a functor which has a left adjoint automatically preserves limits (and the near converse to this given by the Adjoint Functor Theorem). Or the theorem that a category has a limit for the entire category if and only if it has an initial object. Or the Yoneda lemma, which is a direct generalization of Cayley's theorem for groups. Or Lambek's lemma that the structure map of an initial algebra is an isomorphism.

What is a group-theoretic result of interest to you, apart from the aforementioned Lagrange's theorem? It might in particular be good to see a group-theoretic result which you find interesting even outside the context of finite groups (as there is not such an emphasis on considering finiteness in typical category theory, though one could of course say things specifically about finite categories too).

In particular, you seem fond of the application of Lagrange's theorem to deriving facts about prime numbers.

Presumably, what you have in mind here is the proof of Fermat's little theorem by saying "The integers modulo prime p under multiplication form a monoid of size p, among which the invertible subgroup comprises precisely the values not equivalent to zero (thus, of size p - 1). Within these, the order of any value x is equal to the size of the subgroup x generates, which by Lagrange's theorem (decomposing the entire group into its cosets; i.e., its isomorphic orbits under the subgroup acting freely upon it) must divide the size of the overall group. Thus, x^(p - 1) = 1 mod p".
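For what it's worth, this group-theoretic reading is easy to check numerically; a small Python sketch (the helper name `order_mod` is made up for illustration):

```python
# Numeric check of the group-theoretic reading of Fermat's little theorem:
# mod a prime p, the nonzero residues form a group of size p - 1, so the
# order of any element divides p - 1, and x^(p-1) = 1 (mod p) follows.

def order_mod(x, p):
    """Multiplicative order of x modulo p (smallest e > 0 with x^e = 1)."""
    e, acc = 1, x % p
    while acc != 1:
        acc = (acc * x) % p
        e += 1
    return e

p = 13
for x in range(1, p):
    e = order_mod(x, p)
    assert (p - 1) % e == 0        # Lagrange: the order divides |group|
    assert pow(x, p - 1, p) == 1   # Fermat's little theorem follows
```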

Yes, you can say all that in group-theoretic language. But someone skeptical of the value of group theory might object "All you are doing is using fancy words to say what could be said more straightforwardly. After all, Fermat's little theorem was understood by Fermat and others two hundred years before group theory was invented. Euler published his generalization of Fermat's little theorem a hundred years before group theory was invented. All your fancy-schmancy group theory is just pointlessly obfuscatory language for describing something already well understood".

And indeed, all group theory is doing in this case is showing us how to relate this one particular result to many other similar results. But that is the value of abstraction and generalization. That is how math often works: recognizing, once and for all, a common pattern that comes up repeatedly, making particular instances obvious in retrospect through a fluent understanding of their shared connection. That's how it worked for groups, and that's the same way it works for categories.

Have you tried learning something like algebraic topology, or representation theory, or anything that actually uses it (and for which it’s truly indispensable)?

I think if you do, you'll quickly discover why category theory is so nice and useful! Its origins aren't in staring at a blank sheet of paper and trying to come up with needlessly complicated notation and ideas; it came about only after centuries of observation and gradual pattern recognition.

Conversely: if you don’t, I don’t think you’ll ever get it. At least not without much difficulty.

It’s a bit like set theory: basically everyone uses it because it’s such a useful paradigm; hardly anyone studies it for its own sake. Those who do can tell you ‘theorems’ of set theory, and those who do the same for category theory can tell you ‘theorems’ of category theory. The rest of us use it as a convenient (and mind-expanding) language.

By the way, you shouldn’t consider it to be some kind of ontological theory (or even foundation) for mathematics. It makes no attempt to tell us what there is, or even particularly what is true; it gives us a way of simultaneously generalising thousands of definitions and facts, allowing us to see commonalities between a priori only distantly related concepts and therefore further generalisations more easily.

It's just a lot of words for saying that you use [fill in the blank with your language of choice's words, like classes or interfaces] to wrap your classes such that the wrappers provide methods that let you do a variety of things:

- associate immutable contextual stuff with your data (think configuration; in C you'd add a context object pointer argument to all your functions, but in Haskell you'd just wrap your objects in monads)
- associate "mutable" contextual stuff with your data (think state; in C you'd add a context...)
- abstract out I/O so you can write fully deterministic code that's easy to test
- etc. (e.g., tracing, logging, transactions, backtracking, and whatever else you can imagine)

along with standard methods that allow you to chain "statements" into a single expression while still looking like a sequence of "statements".
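As a deliberately minimal illustration of one such wrapper, here is a Maybe-style container sketched in Python; all names here are invented for the example, not taken from any particular library:

```python
# A Maybe-style wrapper whose bind method chains failable "statements"
# into a single expression, short-circuiting on the first failure.

class Maybe:
    def __init__(self, value, ok=True):
        self.value, self.ok = value, ok

    def bind(self, f):
        """Apply f to the wrapped value, propagating failure unchanged."""
        return f(self.value) if self.ok else self

    def __repr__(self):
        return f"Just({self.value!r})" if self.ok else "Nothing"

def parse_int(s):
    try:
        return Maybe(int(s))
    except ValueError:
        return Maybe(None, ok=False)

def reciprocal(n):
    return Maybe(1 / n) if n != 0 else Maybe(None, ok=False)

# Three "statements" chained into one expression; any failure propagates.
result = Maybe("4").bind(parse_int).bind(reciprocal)     # Just(0.25)
failed = Maybe("zero").bind(parse_int).bind(reciprocal)  # Nothing
```

The chain of `bind` calls reads like sequential statements, but it is one expression, which is the point being made above.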

You don't need category theory to understand how type-structure-preserving wrappers work in software development, and people invented them without knowing about category theory.

How much of a background do you have in abstract algebra? I don't think category theory makes much sense or is very satisfying unless you can already list a bunch of categories you interact with regularly, and have results which can be expressed in terms of categories.

If you’re looking for ‘something you can only prove using category theory’, you’ll probably not be able to find much. If you’re looking for ‘something commonly proven using category theory, whose proof without invoking such theory is much less elegant and general’, you’ll find plenty in any area of mathematics that uses category theory.

There’s a whole section of Emily Riehl’s book Category Theory In Context on ‘theorems in category theory’ (since it’s famously said that there are none — you’re certainly not the first to level such an accusation!).

You won't get much insight from a study period measured in hours. CT has actually been incredibly beneficial for computer science. Monads were first used to give denotational semantics to imperative languages. CT has also influenced Haskell's design, giving many general and consistent abstractions across its libraries. Looking ahead, CT is teaching us how to build software in total languages.

It has struck me for a while that associativity can be seen as a higher-order commutativity. Specifically, for the real numbers, associativity just says that you can commute the order in which you evaluate the multiplication map.

To make that more explicit, consider the space of left-multiplication maps Lr(x) = rx for real numbers r and x. Similarly, for right-multiplication maps Rr(x) = xr. Then the associative rule a(xc) = (ax)c can be re-written La∘Rc(x) = Rc∘La(x), which is precisely a commutativity relation.

The above obviously generalizes to arbitrary Abelian groups, but I'm curious if there's more depth to this idea of associativity as "higher-order commutativity" than just what I've said here.
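A quick sketch supporting the observation, using 2x2 matrices so that the underlying multiplication is genuinely non-commutative (the `matmul` helper is hand-rolled just for illustration):

```python
# Even when multiplication itself is non-commutative (2x2 matrices), the
# left-multiplication map L_a and the right-multiplication map R_c commute
# as maps: L_a(R_c(x)) = a(xc) = (ax)c = R_c(L_a(x)), i.e. associativity.

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

a = [[1, 2], [3, 4]]
c = [[0, 1], [1, 1]]
x = [[2, 0], [5, 7]]

L_a = lambda m: matmul(a, m)   # L_a(x) = ax
R_c = lambda m: matmul(m, c)   # R_c(x) = xc

assert matmul(a, c) != matmul(c, a)   # multiplication doesn't commute...
assert L_a(R_c(x)) == R_c(L_a(x))     # ...but L_a and R_c do commute as maps
```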

The distributivity argument is a bit sketchy, though. It defines g.p := p▫q, meaning that g implicitly depends on q, but then writes f(y▫z) = f(g.y) and fy▫fz = g.f(y), where in the first instance g.x = x▫y and in the second g.x = x▫fz.

That said, I do see the point. The distributive rule lets us commute the order in which we compute addition and multiplication. Generally, I guess if we have some family of arrows A and B, where all commutative diagrams Bx∘Ay = Az∘Bw hold, then we can say "A and B commute". This can probably be expressed as some literal commutativity using natural transformations, but I'll need to think on it some more.

Yes: the requirement of commutativity in the "addition" for distributivity to work with a noncommutative "multiplication" is exactly why my current model for code* is based on an (anathema to practicing programmers) unordered choice.

* and also data; with suitable choices of definitions there are very few differences between the left and right hand arguments to application.

Is there any Category Theory tutorial that illustrates that we need this apparatus to solve some real problem? For example, for Group Theory there is an excellent book 'Abel's theorem in problems and solutions' that does exactly that.

> that illustrates that we need this apparatus to solve some real problem?

I think your question is wrong in a sense. Category theory is one of several languages of mathematics, and there are analogies between them. It's kinda like asking "is there a computer program that must be written in C?"

So I think there is an element of taste whether you prefer category theory to some other (logical) language. That being said, just like in programming, it's still useful to know more than one language, because they potentially have different strengths.

Modern algebraic topology, especially homological algebra, more or less requires category theory... intro textbooks such as Rotman's will contain primers on category theory for this reason.

I'd say Grothendieck's proofs of the Weil Conjectures are a good example. His proof uses étale cohomology, and the definition of étale cohomology uses Category Theory in a fundamental way. From the étale cohomology wikipedia page https://en.wikipedia.org/wiki/%C3%89tale_cohomology

"For any scheme X the category Et(X) is the category of all étale morphisms from a scheme to X. It is an analogue of the category of open subsets of a topological space, and its objects can be thought of informally as "étale open subsets" of X. The intersection of two open sets of a topological space corresponds to the pullback of two étale maps to X. There is a rather minor set-theoretical problem here, since Et(X) is a "large" category: its objects do not form a set."

There's a lot of advanced math in that paragraph, but it should be clear that Category Theory is needed to define étale cohomology.

One application I like is the use of the Seifert-van Kampen theorem to prove that the fundamental group of the circle (S^1) is isomorphic to Z. While category theory is not strictly needed to prove this (you can compute pi_1(S^1) using R as a cover in a way that is purely topological, see Hatcher "Algebraic Topology"), if one states the Seifert-van Kampen theorem for groupoids (this uses category theory through the notion of a universal property/pushout) one can compute pi_1(S^1) largely algebraically just from the universal property - in fact you can go through the whole proof without mentioning a homotopy once (see tom Dieck "Algebraic Topology" section 2.7).

This might not meet your criterion exactly, as one can extract a more topological proof and relegate the category theory to a non-essential role, but this requires some more effort and is a harder proof. So I do think it still illustrates that the category theoretic approach does add something beyond just a common language.

As far as I understand, fundamental groups were defined by Poincaré in 1895. And functors in category theory are a generalisation of this idea (i.e. proving something for fundamental groups and then relating this back to topological spaces). So your example sounds backwards to me.

Putting topology aside, and recognizing that 'ease' is subjective, imo Moggi's use of monads to model the denotational semantics of I/O in lazy functional languages such as Haskell is a common textbook example; the creators of Haskell had tried many solutions that did not work in practice before monads cracked it open. Even now this solution is more widely adopted than the alternatives (streaming I/O, linear I/O types, etc) and Moggi's paper remains a classic.

It's annoying that you need so much math to understand the utility of Category Theory. I learned a bunch of Category Theory before I ever saw it used in a useful way.

Grothendieck wrote modern Algebraic Geometry in the language of Category Theory. This is the first time I saw Category Theory really used in a useful way. Grothendieck's proofs of the Weil conjectures are, I would say, a good example of using Category Theory to solve a famous problem. Category Theory is used to define and work with étale cohomology, and étale cohomology plays a fundamental role in Grothendieck's proofs of the Weil conjectures.

That's not an application of category theory. The important theorem here is that the fundamental group is a functor, plus computations of the fundamental group of the disc and the circle. But that's a theorem from topology, not a theorem from category theory. Category theory is merely used as a language, to give the proof a structure.

By that I don't want to say category theory is useless. But regarding the video it's neither necessary, nor an application of category theory.

I found the series of Category Theory lectures by Bartosz Milewski[1] extremely helpful and approachable. It introduces the abstract concepts of category theory while giving concrete examples of those concepts and tying some key concepts back to properties of types in programming languages.

I haven't dug far into CT. I'm slowly making my way through Modern Foundations of Mathematics (Richard Southwell) [1] that was posted here recently.

That said, two comments:

1) The definition of a category is just objects, arrows, and composition. If you're looking for more features, you might be disappointed. (If you've grown up with 'methods' rather than arrows, then you don't necessarily have composition.)

Writing your logic with objects and arrows is just damn pleasant. If I have bytes, and an arrow from bytes to JSON, then I have JSON. If I also have an arrow from JSON to a particular entry in the JSON, then by the property of composition, I have an arrow from bytes to that entry in the JSON.
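That bytes-to-JSON-to-entry pipeline can be sketched literally, e.g. in Python (the function names are invented for the example):

```python
# Two arrows and their composite: bytes -> JSON -> entry.
import json

def bytes_to_json(raw: bytes) -> dict:
    return json.loads(raw)

def json_to_entry(doc: dict) -> int:
    return doc["entry"]

def compose(f, g):
    """Categorical composition: (f . g)(x) = f(g(x))."""
    return lambda x: f(g(x))

# By composition, an arrow straight from bytes to the entry.
bytes_to_entry = compose(json_to_entry, bytes_to_json)
assert bytes_to_entry(b'{"entry": 42}') == 42
```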

2) The various CT structures are re-used over and over and over again, in wildly different contexts. I just read about 'logict' on another post. If you follow the link [2] and look under the 'Instances' heading, you can see it implements the usual CT suspects: Functor, Applicative, Monad, Monoid, etc. So I already know how to drive this unfamiliar technology. A few days ago I read about 'Omega' on yet another post - same deal [3]. What else? Parsers [4], Streaming IO [5], Generators in property-based-testing [6], Effect systems [7] (yet another thing I saw just the other day on another post), ACID-transactions [8] (if in-memory transactions can count as 'Durable'. You don't get stale reads in any case).

They're also widespread in other languages: Famously LINQ in C#. Java 8 Streams, Optionals, CompletableFutures, RX/Observables. However these are more monad-like or monad-inspired rather than literally implementing the Monad interface. So you still understand them and know how to drive them even if you don't know all the implementation details.

However what's lacking (compared to Haskell) is the library code targeting monads. For example, I am always lacking something in Java Futures which should be right there: an arrow I can use to get from List<Future<T>> to Future<List<T>>. In Haskell that code ('sequence') would belong to List (in this case 'Traversable' [9]), not Future, as it can target any Monad. This saves on an n*m implementation problem: i.e. List and logict don't need to know about each other, vector and Omega don't need to know about each other, etc.
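For comparison, Python's asyncio ships this particular arrow as `asyncio.gather` (the `fetch` coroutine below is just a stand-in for real async work):

```python
# The missing arrow List<Future<T>> -> Future<List<T>> is exactly what
# asyncio.gather provides: a list of awaitables in, one awaitable of a
# list out, results in the same order as the inputs.
import asyncio

async def fetch(n: int) -> int:
    await asyncio.sleep(0)   # stand-in for real I/O
    return n * n

async def main() -> list:
    futures = [fetch(n) for n in range(4)]   # a list of awaitables
    return await asyncio.gather(*futures)    # one awaitable of a list

print(asyncio.run(main()))   # [0, 1, 4, 9]
```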

Perhaps not in Java but in C# it is quite common to do

var tasks = names.Select(GetThingAsync); // IEnumerable<Task<Thing>>
var things = Task.WhenAll(tasks); // Task<Thing[]>, await to get Thing[]

Can mix and match with other LINQ methods. There are other useful methods like Parallel.ForEachAsync, .AsParallel (Java has a similar abstraction) and IAsyncEnumerable for asynchronously "streaming back" a sequence of results which let you compose the tasks in an easy way.

Maybe what you wrote is clear to other LINQ users, but as a short feedback, your point is not easily understandable for people who don't know much about it.

As in, from outside, it looks like you are in there way deep, but it might need some more work to go and fetch the (assumed) target audience.

Because arrows are functions/mappings, and everything we do in programming involves arrows, even in languages where arrows aren't used as notation.

The common formulation is that a "monad is just a monoid in the category of endofunctors", which is not saying much but with big words, and the joke lies in understanding what it's saying. Bartosz Milewski has a lecture video series on youtube that's all about explaining that joke, and I highly recommend it because it's actually a wonderful CS lecture series.

As I understand it, it's a bit of an inside joke to minimize the complexity of mathematical structure. Its frequent use is along the same lines as the frequent use of "* Considered Harmful" in CS.

A semigroup is a set M and a function m: M x M -> M that is associative, m(a,m(b,c)) = m(m(a,b),c). Writing ab for m(a, b) this can be written a(bc) = (ab)c. A monoid is a semigroup with an identity e satisfying em = m = me for every m in M.
This can be used for map-reduce and pivot tables.
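The map-reduce point can be sketched in Python: because the operation is associative with an identity, a fold can be split into chunks in any grouping and recombined afterwards.

```python
# The string-concatenation monoid: an associative operation m with an
# identity element. Associativity is what lets a fold be chunked across
# "workers" and combined in any grouping with the same result.
from functools import reduce

identity = ""              # identity of the concatenation monoid
m = lambda a, b: a + b     # the associative operation

words = ["cat", "egor", "y the", "ory"]
whole = reduce(m, words, identity)

# Split the fold across two chunks, then combine the partial results.
left = reduce(m, words[:2], identity)
right = reduce(m, words[2:], identity)
assert m(left, right) == whole == "category theory"
```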

A (small) category is a partial monoid. The domain of m is the set of composable arrows.

As a non-mathematician, I'm confused how category theory relates back to formal systems. Is it its own formal system, or is it built on top of ZFC or type theory? If it's built on top of something else, what sort of algorithm would tell me whether the DAG that constitutes a proof from a set of axioms to a theorem involves category theory or not?

I'm fairly well-versed in AA, linear alg, topology, a few others. I got zero satisfaction from CT.

What would be the first interesting (if even mildly insightful) result in CT? I'll pull out my books and take a look...

See my other reply. It makes some things way easier in real-world programs (and libraries).

I don't know if there's more depth to it but fwiw I think that's a really cool insight regardless.

See also https://www.cs.utexas.edu/~EWD/ewd11xx/EWD1142.PDF (which ties in distributivity)

Cool. A Dijkstra note. Thanks for sharing.

Thanks for the prod!

This is likely the same as a distributive law of monads... https://ncatlab.org/nlab/show/distributive+law on good days I might understand this stuff..

Another less high-brow connection: associativity of matrix multiplication relies on the usual distributivity law..

Yes: the requirement of commutativity in the "addition" for distributivity to work with a noncommutative "multiplication" is exactly why my current model for code* is based on an (anathema to practicing programmers) unordered choice.

* and also data; with suitable choices of definitions there are very few differences between the left and right hand arguments to application.

Is there any Category Theory tutorial that illustrates that we need this apparatus to solve some real problem? For example, for Group Theory there is an excellent book 'Abel's theorem in problems and solutions' that does exactly that.

> that illustrates that we need this apparatus to solve some real problem?

I think your question is wrong in a sense. Category theory is one of several languages of mathematics, and there are analogies between them. It's kinda like asking "is there a computer program that must be written in C?"

So I think there is an element of taste whether you prefer category theory to some other (logical) language. That being said, just like in programming, it's still useful to know more than one language, because they potentially have different strengths.

Hard disagree. Group theory, for example, is not just another language, but is absolutely necessary for proving Abel's theorem. So the question was:

Group theory -> Abel's theorem

Category theory -> ???

(below I got the answer 'Weil conjectures')

Modern algebraic topology, especially homological algebra, more or less requires category theory... intro textbooks such as Rotman's will contain primers on category theory for this reason.

That sounds good, but what's the easiest to state mathematical problem that requires Category Theory apparatus for the solution?

I'd say Grothendieck's proofs of the Weil Conjectures are a good example. His proof uses etale cohomology, and the definition of etale cohomology uses Category Theory in a fundamental way. From the etale cohomology wikipedia page https://en.wikipedia.org/wiki/%C3%89tale_cohomology

"For any scheme X the category Et(X) is the category of all étale morphisms from a scheme to X. It is an analogue of the category of open subsets of a topological space, and its objects can be thought of informally as "étale open subsets" of X. The intersection of two open sets of a topological space corresponds to the pullback of two étale maps to X. There is a rather minor set-theoretical problem here, since Et(X) is a "large" category: its objects do not form a set."

There's a lot of advanced math in that paragraph, but it should be clear that Category Theory is needed to define etale cohomology.

Thanks again! It will take some time for me to digest, but at least now I know in which direction to look.

One application I like is the use of the Seifert-van Kampen theorem to prove that the fundamental group of the circle (S^1) is isomorphic to Z. While category theory is not strictly needed to prove this (you can compute pi_1(S^1) using R as a cover in a way that is purely topological, see Hatcher "Algebraic Topology"), if one states the Seifert-van Kampen theorem for groupoids (this uses category theory through the notion of a universal property/pushout) one can compute pi_1(S^1) largely algebraically just from the universal property - in fact you can go through the whole proof without mentioning a homotopy once (see tom Dieck "Algebraic Topology" section 2.7).

This might not meet your criterion exactly, as one can extract a more topological proof and relegate the category theory to a non-essential role, but this requires some more effort and is a harder proof. So I do think it still illustrates that the category theoretic approach does add something beyond just a common language.

As far as I understand, fundamental groups were defined by Poincare in 1895. And functors in category theory are a generalisation of this idea (i.e. proving something for fundamental groups and then relating this back to topological spaces). So your example sounds backwards to me.

Putting topology aside, and recognizing that 'ease' is subjective, imo Moggi's use of monads to model the denotational semantics of I/O in lazy functional languages such as Haskell is a common textbook example; the creators of Haskell had tried many solutions that did not work in practice before monads cracked it open. Even now this solution is more widely adopted than the alternatives (streaming I/O, linear I/O types, etc) and Moggi's paper remains a classic.

It's annoying that you need so much math to understand the utility of Category Theory. I learned a bunch of Category Theory before I ever saw it used in a useful way.

Grothendieck wrote modern Algebraic Geometry in the language of Category Theory. This is the first time I saw Category Theory really used in a useful way. Grothendieck's proofs of the Weil conjectures are, I would say, a good example of using Category Theory to solve a famous problem. Category Theory is used to define and work with etale cohomology, and etale cohomology plays a fundamental role in Grothendieck's proofs of the Weil conjectures.

https://en.wikipedia.org/wiki/Weil_conjectures

Thanks, this looks very interesting!

Tragically (for pure mathematicians), there are some real world use cases.

Last year's SoME3 had this entrant https://youtu.be/Njx2ed8RGis?si=-Q0TwT8LKmTC9o0R

Also Oliver Lugg has a hilarious overview of the topic

https://youtu.be/yAi3XWCBkDo?si=b5MFcnfMrYyrv_Xd

> Last year's SoME3 had this entrant https://youtu.be/Njx2ed8RGis?si=-Q0TwT8LKmTC9o0R

That's not an application of category theory. The important theorem here is that the fundamental group is a functor, plus computations of the fundamental group of the disc and the circle. But that's a theorem from topology, not a theorem from category theory. Category theory is merely used as a language, to give the proof a structure.

By that I don't want to say category theory is useless. But regarding the video it's neither necessary, nor an application of category theory.

> “There are some real world use cases”

> Proceeds to link to a non-constructive proof

Why are mathematicians like this?

I found the series of Category Theory lectures by Bartosz Milewski[1] extremely helpful and approachable. It introduces the abstract concepts of category theory while giving concrete examples of those concepts and tying some key concepts back to properties of types in programming languages.

[1]: https://www.youtube.com/playlist?list=PLbgaMIhjbmEnaH_LTkxLI...

Although not explicitly stated, this application [1] of codesign for engineering problems is actually built using category theory.

[1]: https://arxiv.org/abs/1512.08055

functional programming seems to rely on it. https://en.wikipedia.org/wiki/Category_theory

I haven't dug far into CT. I'm slowly making my way through Modern Foundations of Mathematics (Richard Southwell) [1] that was posted here recently.

That said, two comments:

1) The definition of a category is just objects, arrows, and composition. If you're looking for more features, you might be disappointed. (If you've grown up with 'methods' rather than arrows, then you don't necessarily have composition.)

Writing your logic with objects and arrows is just damn pleasant. If I have bytes, and an arrow from bytes to JSON, then I have JSON. If I also have an arrow from JSON to a particular entry in the JSON, then by the property of composition, I have an arrow from bytes to that entry in the JSON.
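A minimal Haskell sketch of that composition claim; the types and the parser here are toy stand-ins, not a real JSON library:

```haskell
-- Toy stand-ins: Bytes and Json are simplified types, and bytesToJson
-- is a pretend parser, just to show arrows composing.
type Bytes = String
type Json  = [(String, Int)]  -- a toy "JSON object"

bytesToJson :: Bytes -> Json
bytesToJson _ = [("answer", 42)]  -- pretend we parsed the bytes

jsonToEntry :: Json -> Maybe Int
jsonToEntry = lookup "answer"

-- Composition gives the bytes-to-entry arrow for free.
bytesToEntry :: Bytes -> Maybe Int
bytesToEntry = jsonToEntry . bytesToJson

main :: IO ()
main = print (bytesToEntry "{\"answer\": 42}")
```

The point is only the last definition: given the two arrows, `(.)` hands you the third without writing any new logic.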

2) The various CT structures are re-used over and over and over again, in wildly different contexts. I just read about 'logict' on another post. If you follow the link [2] and look under the 'Instances' heading, you can see it implements the usual CT suspects: Functor, Applicative, Monad, Monoid, etc. So I already know how to drive this unfamiliar technology. A few days ago I read about 'Omega' on yet another post - same deal [3]. What else? Parsers [4], Streaming IO [5], Generators in property-based-testing [6], Effect systems [7] (yet another thing I saw just the other day on another post), ACID-transactions [8] (if in-memory transactions can count as 'Durable'. You don't get stale reads in any case).

They're also widespread in other languages: Famously LINQ in C#. Java 8 Streams, Optionals, CompletableFutures, RX/Observables. However these are more monad-like or monad-inspired rather than literally implementing the Monad interface. So you still understand them and know how to drive them even if you don't know all the implementation details.
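Since the same Monad interface recurs everywhere, one generic function can drive all of them. For instance, Haskell's standard sequence turns a list of monadic values into a monadic list for any monad:

```haskell
-- sequence :: Monad m => [m a] -> m [a], from the standard Prelude.
-- The same function works unchanged for Maybe and for lists.
main :: IO ()
main = do
  print (sequence [Just 1, Just 2, Just 3])  -- all succeed
  print (sequence [Just 1, Nothing])         -- one failure fails the lot
  print (sequence [[1, 2], [3, 4]])          -- list monad: all combinations
```

Written once against the Monad interface, it needs no knowledge of which monad it's driving.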

However, what's lacking (compared to Haskell) is the library code targeting monads. For example, I am always missing something in Java Futures that should be right there: an arrow I can use to get from List<Future<T>> to Future<List<T>>. In Haskell that code ('sequence') would belong to List (in this case 'Traversable' [9]), not Future, as it can target any Monad. This saves on an n*m implementation problem: List and logict don't need to know about each other, vector and Omega don't need to know about each other, etc.

Perhaps not in Java, but in C# it is quite common to do this, and you can mix and match with other LINQ methods. There are other useful methods like Parallel.ForEachAsync, .AsParallel (Java has a similar abstraction), and IAsyncEnumerable for asynchronously "streaming back" a sequence of results, which let you compose the tasks in an easy way.

Maybe what you wrote is clear to other LINQ users, but as a short feedback, your point is not easily understandable for people who don't know much about it. As in, from outside, it looks like you are in there way deep, but it might need some more work to go and fetch the (assumed) target audience.

Why does every category theory primer use this exact formulation: `(.*) is just an [arrow|object] in the category of (.*)`?

Every undergrad course, office hour, research paper, and manual that I've ever seen spams it.

Because arrows are functions/mappings, and everything we do in programming involves arrows, even in languages where arrows aren't used as notation.

The common formulation is that a "monad is just a monoid in the category of endofunctors", which is not saying much but with big words, and the joke lies in understanding what it's saying. Bartosz Milewski has a lecture video series on youtube that's all about explaining that joke, and I highly recommend it because it's actually a wonderful CS lecture series.
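For what it's worth, the slogan can be made half-concrete in Haskell: the monoid "multiplication" in question is join (the μ of the slogan), which merges two layers of the same functor into one, the way a monoid multiplies two elements into one:

```haskell
import Control.Monad (join)

-- join :: Monad m => m (m a) -> m a, the μ of "a monoid in the
-- category of endofunctors"; return plays the role of the unit η.
main :: IO ()
main = do
  print (join [[1, 2], [3]])                       -- μ for the list monad
  print (join (Just (Just 5)))                     -- μ for Maybe
  print (join (Just Nothing :: Maybe (Maybe Int))) -- collapsing a failure
```

The full statement needs more machinery (monoidal categories, functor composition as the "product"), but this is the shape of it.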

As I understand it, it's a bit of an inside joke to minimize the complexity of mathematical structure. Its frequent use is along the same lines as the frequent use of "* Considered Harmful" in CS.

A semigroup is a set M and a function m: M x M -> M that is associative, m(a,m(b,c)) = m(m(a,b),c). Writing ab for m(a, b) this can be written a(bc) = (ab)c. A monoid is a semigroup with an identity e satisfying em = m = me for every m in M. This can be used for map-reduce and pivot tables.
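A minimal Haskell illustration of the map-reduce point, using the standard Monoid machinery:

```haskell
import Data.Monoid (Sum (..))

-- mconcat :: Monoid a => [a] -> a is the generic "reduce": it folds with
-- the monoid's m and starts from its identity e. Sum wraps numbers in
-- the (+, 0) monoid; String is the (++, "") monoid.
main :: IO ()
main = do
  print (getSum (mconcat (map Sum [1 .. 10 :: Int])))
  print (mconcat ["map", "-", "reduce"])
```

Because m is associative, the reduction can be split across workers in any grouping and the results recombined, which is what makes the map-reduce pattern safe to parallelize.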

A (small) category is a partial monoid. The domain of m is the set of composable arrows.

As a non-mathematician, I'm confused how category theory relates back to formal systems. Is it its own formal system, or is it built on top of ZFC or type theory? If it's built on top of something else, what sort of algorithm would tell me whether the DAG that constitutes a proof from a set of axioms to a theorem involves category theory or not?

The answers to these questions are exactly the same as for group theory or graph theory or linear algebra or any other such thing.