Based on this comment (https://news.ycombinator.com/item?id=46352389), I think I understood the missing first paragraph:
If you have the expression 1+2*3, you have three operands and two operators. You need a rule to decide which operator to apply first.
In mathematics, the rule is "* and / before + and -", and then from left to right. This means that normally you do 2*3 first, then the 1+.
But what if you want to do 1+2 first?
There is another alternative: parentheses. Those mean "do the thing inside first", so (1+2)*3 changes the precedence and now you do 1+2 first, then *3.
The post is asking: with parentheses you can increase the precedence of operations. What if you could decrease it?
Let's use «» as another notation (the blog uses parentheses, but that makes it really confusing); it means "do the thing inside last". So the expression 1+«2*3» means "do 1+ first, then 2*3".
The issue is... this doesn't quite make sense. What the blog is really saying is to reduce the precedence of operators. Think of the expression 1+2«*»3 or 1+2(*)3, where the rule is "the parenthesized operator has one precedence level less", so 1+2(*)3 = (1+2)*3.
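To make that reading concrete, here's a toy precedence-climbing parser (mine, not the blog's; the (op) token syntax and the numbers are just for illustration) where writing an operator as (op) demotes it by one precedence level:

import re

BASE_PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def tokenize(src):
    # match demoted operators like (*) before bare parentheses
    return re.findall(r'\(\s*[-+*/]\s*\)|\d+|[-+*/()]', src)

def op_info(tok):
    # return (operator, effective precedence), or None if tok is not an operator
    if tok in BASE_PREC:
        return tok, BASE_PREC[tok]
    if len(tok) > 1 and tok.startswith('('):      # '(*)' -> '*', one level lower
        op = tok.strip('() ')
        return op, BASE_PREC[op] - 1
    return None

def parse(toks, pos=0, min_prec=0):
    # precedence climbing; returns (value, next position)
    if toks[pos] == '(':                          # ordinary grouping still works
        lhs, pos = parse(toks, pos + 1, 0)
        pos += 1                                  # skip the matching ')'
    else:
        lhs, pos = int(toks[pos]), pos + 1
    while pos < len(toks):
        info = op_info(toks[pos])
        if info is None or info[1] < min_prec:
            break
        op, prec = info
        rhs, pos = parse(toks, pos + 1, prec + 1) # left-associative
        lhs = {'+': lhs + rhs, '-': lhs - rhs,
               '*': lhs * rhs, '/': lhs / rhs}[op]
    return lhs, pos

print(parse(tokenize('1+2*3'))[0])     # 7  -- normal precedence
print(parse(tokenize('1+2(*)3'))[0])   # 9  -- (*) is demoted, so (1+2)*3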
If we actually (as the title seems to imply) invert the parentheses, then for your example we get 1+2)*(3 .
Now all you need are the opening and closing parentheses at the start and end, and we're back to normal.
Thank you. I thought I was going crazy reading the article which doesn’t connect open and close parenthesis :: higher and lower precedence :: indent and outdent :: +1 and -1 and just flip it around to get the opposing polarity.
A real Wesley Crusher moment.
Yeah, that seems a much more robust formulation of the whole thing. Flip all parens and enclose the whole string in more parens.
that results in
(1 + 2)*(3)
which is (as GP notes) equivalent to "normal", i.e. what we do today. Right?
This seems to be the best guess so far. But then I am wondering, how is
a (*) b + c
parsed then? The precedence of '*' is bumped down, but does that mean it now has strictly lower precedence than '+', or the same? In the first case the operation is parsed as
a * (b + c)
In the second case, the "left to right" rule takes over and we get
(a * b) + c
And what happens when there are more than 2 priority groups? Taking C as an example, we have that '*' has higher precedence than '+', which has higher precedence than '<<' [1]. So
a + b * c << d
means
(a + (b * c)) << d
Now I could use the "decrease precedence" operator you proposed (possibly proposed by the author?) and write
a + b (*) c << d
which then bumps down the precedence of '*' to... one level lower? Which means the same level as '+', or a level lower, i.e. a new precedence level between '+' and '<<'? Or maybe this operator should end up at the bottom of the precedence rank, i.e. lower than ','?
The more I think about this, the less sense it makes...
[1] https://en.cppreference.com/w/c/language/operator_precedence...
I don't think that's even well-defined if you have arbitrary infix operators with arbitrary precedence and arbitrary associativity (think Haskell). Suppose $, & and @ are operators in that order of precedence, all right-associative. Using your notation, what is:
a & << b $ c >> @ d
If $ is reduced below & but above @ then it's the same as:
((a & b) $ c) @ d
If it's reduced below both & and @ then it becomes:
(a & b) $ (c @ d)
I think conceptualizing parentheses as "increase priority" is fundamentally not the correct abstraction; it's school brain, in a way. They are a way to specify an arbitrary tree of expressions, and in that sense they're complete.
Clearly we need left-associative and right-associative inverse parentheses.
a & )b $ c) @ d would mean ((a & b) $ c) @ d.
a & (b $ c( @ d would mean a & (b $ (c @ d)).
Combining both, a & )b $ c( @ d would mean (a & b) $ (c @ d).
;)
Thanks, this makes more sense, the blog post was written in a really confusing way.
Thanks indeed.
Using a simple left-to-right evaluation is the most logical solution. You can reorder expressions to use fewer parentheses and make them easier to read. E.g.: Smalltalk :-).
But this requires everyone un-learning their primary-school maths of e.g. multiply-before-add, so it's not popular.
Having hand-picked operator precedences complicates things further when you allow operator overloading and user-defined operators. E.g. Swift has special keywords to specify the precedence of these. Ick...
APL uses a simple right to left order of evaluation :)
Thanks, writing it as 1+2(*)3 made it click for me.
Reminds me of the '$' operator in Haskell - it lowers the precedence of function application, basically being an opening parenthesis that's implicitly closed at the end of the line.
Well, I'll be that guy... If you're going to disturb the normal way of writing expressions, RPN or prefix notation (AKA Polish notation) could be a better option. Neither needs parentheses, because neither needs precedence/priority rules - which are obviously a disadvantage of infix notation.
The HP 48 famously took the bet of going against the mainstream notation. I wonder to what extent this is one of those "accidents of history".
RPN moreover simplifies parsing, as shown by the Forth language.
Prefix notation, as used for instance by Lisp, doesn't actually need parentheses either; Lisp uses them because it allows extensions over basic operators and, more generally, "variadic" operators and functions (e.g. (+ 1 2 3 4)). Without this "fancy" feature, prefix notation is unambiguous as well: / + 1 2 3. [1]
On a side note, Smalltalk is one of the few languages that said "duck it" and requires explicit parentheses instead - which is, IMO, not an insane approach when you see that in languages with 17 levels of priority like C, you end up adding parentheses anyway as soon as the expression is not trivial, "just to be sure" (e.g. because it mixes boolean, arithmetic and relational operators, as in a & 0xF < b + 1).
[1] https://en.wikipedia.org/wiki/Polish_notation
The HP 48 followed a couple of decades of HP calculators using RPN (hardly famous, just an evolution). HP’s first calculator used RPN.
I recommend https://www.hpmuseum.org/ for more details.
> There is another alternative: parentheses.
Gerald Jay "Jerry" Sussman from Scheme and SICP fame (and others) would tell you there's also the prefix notation (but ofc only infix makes TFA valid: prefix or postfix makes it mostly moot). "3 x 4 x 7 x 19" only looks natural to us because we've been taught that notation as toddlers (well, ok, as young kids).
But "x 3 4 7 19" is just as valid (Minksy and having to understand someting in five different ways or you don't understand it etc.).
P.S: also your comment stinks of AI to me.
I'm all for terseness in blog writing, but I think the author forgot to add the content. I know nothing more than I did when I opened it.
And yet, this is the top entry on HN right now?! How does that happen??
I read the article twice and it still doesn't make sense. I tried to make sense of it, but no matter how I slice and dice the article, the "inverse parentheses" idea seems inconsistent or ill-defined.
Two comments here which explain the ill-definedness of it:
https://news.ycombinator.com/item?id=46352560
https://news.ycombinator.com/item?id=46352697
Engagement from people trying to figure it out :-)
The blue/gold dress of HN nerd sniping
> Have you ever noticed that lots of programming languages let you use parentheses to group operands, but none use them to ungroup them?
Since this doesn't exist in practice, shouldn't the article author first explain what they mean by that?
The examples at the end show that it's syntax for "parse such that this expression is not grouped". Essentially I guess this could be modelled as an operator `(_+_)` for every existing operator `_+_`, which has its binding precedence negated.
Am I stupid if I don't get it? What is the intended end state? What does "ungroup operands" mean?
I think ungrouping makes sense if you consider reverse parentheses as a syntactic construct added to the language, not a replacement for the existing parentheses.
For instance, using "] e [" as the notation for reverse parentheses around expression e, with the second line showing reverse-parentheses simplification, the third line showing the grouping after parsing, and the fourth line using postfix notation:
A + B * (C + D) * (E + F)
=> A + B * (C + D) * (E + F)
=> (A + (B * (C + D) * (E + F)))
=> A B C D + E F + * * +
A + ] B * (C + D) [ * (E + F)
=> A + B * C + D * (E + F)
=> ((A + (B * C)) + (D * (E + F)))
=> A B C * + D E F * + +
So what ungrouping would mean is to undo the grouping done by regular parentheses.
However, this is not what is proposed later in the article.
Possibilities include reversing the priority inside the reverse parentheses, or lowering the priority wrt the rest of the expression.
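Read that way, "reverse parentheses" are almost a purely textual transformation. A quick Python sketch of that reading (using the ] e [ notation above; this is just my toy, not the article's definition):

import re

def ungroup(expr):
    # replace each "] ... [" span with its contents, minus any parentheses inside it
    return re.sub(r'\]([^\[\]]*)\[',
                  lambda m: m.group(1).replace('(', '').replace(')', ''),
                  expr)

print(ungroup('A + ] B * (C + D) [ * (E + F)'))
# A +  B * C + D  * (E + F)   (modulo whitespace)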
I'm not sure I'm following, but I think what he means is that if normal parentheses around an addition mean the addition must precede multiplication, then these anti-parentheses around a multiplication have to make the addition take place before it.
I stumbled on this:
https://lobste.rs/s/qoqfwz/inverse_parentheses#c_n5z77w
which should provide the answer.
I was hoping the parentheses themselves would be flipped. Like this:
> 1 + )2 * 3(
(1 + 2) * 3
I think surrounding the operand would make slightly more sense, as in 1 + 2 (*) 3 as if it's a "delayed form" of the operation that it represents.
If you do both (use flipped parentheses around the operators), it makes even more sense, and makes the parsing trivial to boot: just surround the entire expression with parentheses and parse normally. For instance:
1 + 2 )*( 3
becomes
(1 + 2 )*( 3)
which is actually just what the author wants. You might even want multiple, or an arbitrary number of, external parentheses. Say we want to give the divide the least precedence, the multiply the middle, and the add the most. We could do that like:
1 + 2 )/( 3 ))*(( 4
Surround it with two sets of parens and you have:
((1 + 2 )/( 3 ))*(( 4))
I haven't quite proved to myself that this always does what you expect, though...
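For what it's worth, the outer-parenthesis trick is easy to mechanise. A toy Python sketch (mine; it leans on eval purely for illustration, and assumes the flipped parens are written around real operators, as above):

def balance_outer(expr):
    # count how far the running parenthesis depth dips below zero,
    # then add that many '(' at the front and enough ')' at the end
    depth = min_depth = 0
    for ch in expr:
        depth += {'(': 1, ')': -1}.get(ch, 0)
        min_depth = min(min_depth, depth)
    opening = -min_depth
    closing = opening + depth
    return '(' * opening + expr + ')' * closing

for src in ('1 + 2 )*( 3', '1 + 2 )/( 3 ))*(( 4'):
    fixed = balance_outer(src)
    print(repr(src), '->', repr(fixed), '=', eval(fixed))
# '1 + 2 )*( 3'         -> '(1 + 2 )*( 3)'           = 9
# '1 + 2 )/( 3 ))*(( 4' -> '((1 + 2 )/( 3 ))*(( 4))' = 4.0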
Same.
That said, if you try to use that alongside ordinary parentheses, it would get ambiguous as soon as you nest them.
Wait, no. It makes no sense to use the same characters! An "inverted" opening paren is visually identical to a "normal" closing paren. IMHO the entire proposition is inane.
Echoing the sentiment that I'm not really sure what I'm meant to be looking at here. A motivating example at the start would have helped, I think.
Slightly unrelated:
Instead of ordinary brackets, one can also use the dot notation. I think it was used in Principia Mathematica or slightly later:
(A (B (C D)))
would be
A . B : C .: D
Essentially, the more dots you add, the stronger the grouping operator is binding. The precedence increases with the number of dots.
However, this is only a replacement for ordinary parentheses, not for these "reverse" ones discussed here. Maybe for reverse, one could use groups of little circles instead of dots: °, °°, °°°, etc.
I believe Peano dot notation works the other way ’round;
A . B : C :. D
would be, as I understand it, equivalent to:
((A B) C) D
The “general principle” is that a larger number of dots indicates a larger subformula.¹
What if you need to nest parentheses? Then you use more dots. A double dot (:) is like a single dot, but stronger. For example, we write ((1 + 2) × 3) + 4 as 1 + 2 . × 3 : + 4, and the double dot isolates the entire 1 + 2 . × 3 expression into a single sub-formula to which the + 4 applies.²
A dot can be thought of as a pair of parentheses, “) (”, with implicit parentheses at the beginning and end as needed.
In general the “direction” rule for interpreting a formula ‘A.B’ will be to first indicate that the center dot “works both backwards and forwards” to give first ‘A).(B’, and then the opening and closing parentheses are added to yield ‘(A).(B)’. The extra set of pairs of parentheses is then reduced to the formula (A.B).³
So perhaps one way of thinking about it is that more dots indicates more separation.
¹ https://plato.stanford.edu/entries/pm-notation/dots.html
² https://blog.plover.com/math/PM.html
³ https://plato.stanford.edu/entries/pm-notation/dots.html
See also https://plato.stanford.edu/entries/pm-notation/index.html and https://muse.jhu.edu/article/904086.
Oh you are right, more dots indicate lower operator precedence (weaker binding), not the other way round. Though the explanations you cited seem confusing to me. Apparently by non-programmers.
Could this be the origin of Lisp and ML family list notation?
I've played with operator precedence parsing earlier this year to produce an infix implementation for TXR Lisp.
Documentation entry point: https://www.nongnu.org/txr/txr-manpage.html#N-BEB6083E
I may have invented something new, which I called "Precedence Demotion". I did some research regarding prior art for this exact thing but didn't find it.
https://www.nongnu.org/txr/txr-manpage.html#N-89023B87
LOL, I see I have a bug in the quadratic-roots example (not related to the above). The example has correct results for the given inputs because a is 1.
It's ironic: you add infix to Lisp and, wham, you make a bugeroo because of infix that would never happen in prefix, right in the documentation. Like a poetic justice punishment.
Is it the same as flipping every parenthesis to the other side of the number it's adjacent to, and then adding enough parentheses at the start and end?
For example,
(1 + 2) * (3 + 4)
becomes
1) + (2 * 3) + (4
and then we add the missing parentheses and it becomes
(1) + (2 * 3) + (4)
which seems to achieve a similar goal and is pretty unambiguous.
Working with lisp quickly made me realize how over-rated operator precedence (or even the concept of "operators") is. All that effort and all those early grade-school hours spent on this arbitrary shorthand, instead of treating the operations themselves as functions and embracing the divine order of (more) parens.
Do you really like Lisp?
Disclaimer: I used the /royal/ LISP, as in the 'Lisp family'. My introduction, and language of choice, has been Clojure (whose qualification as a Lisp is technically contested by certain fundamentalists), but it is still quite paren-based either way.
And I do, I bounced off it the first time long ago, and really took to it the second go around; it's been my daily driver at home and work for some years now. It brings me great joy.
Regarding the first two footnotes, I’m pretty sure that originally the singular form “parenthesis” just refers to the aside inserted (by use of brackets, like this) into a sentence. Because it sounds like a plural and because of the expression “in parenthesis”, people probably mistakenly applied the word to the pair of symbols, and when that became common, started using the real plural “parentheses”. This has staying power because it’s fancy and “brackets” is way overloaded, but historically it’s probably just wrong and especially nonsensical in math and programming, where we don’t use them to insert little sidenotes.
I am in the middle of developing a parser for a new language with original representation vs. syntax issues to resolve.
Clearly, this was the worst possible time for me to come across this brain damaging essay.
I really can’t afford it! My mathematical heart can’t help taking symmetrical precedence control seriously. But my gut is experiencing an unpleasant form of vertigo.
Prompt: Suggest a topic for a short blog post that will nerd-snipe as many readers as possible while making no actual sense at all.
System: How about “Inverse Parentheses”? We can write the entire article without ever defining what it means. Nerds will be unable to resist.
The footnotes are top-tier ADHD. Particularly loved the footnote on a footnote, 10/10.
The real question is, why does Python even have parentheses? If semantic indent is superior to braces, it ought to beat parentheses, too. The following should yield 14:
Also, don't forget that python has;
Without this, Python would basically have to be Yaml-ish Lisp:
=
  a
  *
    2
    +
      3
      4
Let's drop the leading Yaml dashes needed to make ordered list elements. So we have an = node (assignment) which takes an a destination, and a * expression as the source operand. *'s operands are 2 and a + node, whose operands are 3 and 4.
I don't understand.
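To spell out the Yaml-ish Lisp reading: here's a toy Python sketch (mine) that parses the indented tokens into a tree and evaluates it, giving a = 14:

SRC = """\
=
  a
  *
    2
    +
      3
      4
"""

def parse_block(lines, i=0, indent=0):
    # one token per line; deeper indentation means "child of the previous token"
    node = [lines[i][1]]
    i += 1
    while i < len(lines) and lines[i][0] > indent:
        child, i = parse_block(lines, i, lines[i][0])
        node.append(child)
    return (node if len(node) > 1 else node[0]), i

def ev(node, env):
    if isinstance(node, str):
        return int(node) if node.isdigit() else env[node]
    op, *args = node
    if op == '=':
        env[args[0]] = ev(args[1], env)
        return env[args[0]]
    left, right = (ev(a, env) for a in args)
    return left * right if op == '*' else left + right

lines = [(len(ln) - len(ln.lstrip(' ')), ln.strip()) for ln in SRC.splitlines() if ln.strip()]
tree, _ = parse_block(lines)
print(tree)           # ['=', 'a', ['*', '2', ['+', '3', '4']]]
env = {}
print(ev(tree, env))  # 14
print(env)            # {'a': 14}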
Ha! I was expecting/wondering something about the semantics of )( parentheses (which I have no idea what they could be, but... why not?)
Parentheses used to decrease precedence? Everything outside the parentheses would be done before what is inside them?
Where do stars live? That's what I wonder.
I think reading this let me experience the feeling a Bene Gesserit has when they hear about a preborn.
Using Brave on MacOS, I cannot scroll the page to see the entire text. On Firefox, it scrolls fine.
Same in Safari. It has something to do with the
in the main CSS file: https://kellett.im/theme/main.css
Cannot scroll on Safari on macOS, either. What also doesn't work is making the font smaller / larger.
Splendid. Someone found a way to break Browser Scrolling. (Firefox 115.16 for Win7)
Well done.
The singular for parenthesis is paren.
Opened this excitedly, thinking I was going to get something related to S-expressions/Lisp, was disappointed...
The core idea: normally, parentheses strengthen grouping:
1 + (2 * 3) forces 2 * 3 to happen first.
Without them, operator precedence decides. The post asks a deliberately strange question:
What if parentheses did the opposite — instead of grouping things tighter, they made them bind less tightly?
The concept of "inverse parentheses" that unbundle operators is brilliant! The tokenizer hack (friendliness score by parenthesis depth, inspired by Python INDENT/DEDENT) + precedence climbing for infinite levels is elegant – parsing solved without convoluted recursive grammar. kellett
I love the twist: reversing the friendly levels gives you a classic parser, and it opens up crazy experiments like whitespace weakening. Have you tested it on non-arithmetic ops (logical/bitwise) or more complex expressions like ((()))?
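If I understand that description, it's the old trick of folding nesting depth into the precedence itself. A rough Python sketch of the idea (my own reconstruction, not the article's code):

import re

BASE = {'+': 1, '-': 1, '*': 2, '/': 2}

def score_ops(src):
    # "friendliness score": every operator gets base precedence + 10 per level of
    # enclosing parentheses, and the parentheses themselves are then thrown away.
    # An "inverse" parenthesis would simply subtract 10 instead of adding it.
    out, depth = [], 0
    for tok in re.findall(r'\d+|[-+*/()]', src):
        if tok == '(':
            depth += 1
        elif tok == ')':
            depth -= 1
        elif tok in BASE:
            out.append((tok, BASE[tok] + 10 * depth))
        else:
            out.append((tok, None))
    return out

def parse(items, pos=0, min_prec=0):
    # ordinary precedence climbing over the pre-scored operators
    lhs, pos = int(items[pos][0]), pos + 1
    while pos < len(items) and items[pos][1] >= min_prec:
        op, prec = items[pos]
        rhs, pos = parse(items, pos + 1, prec + 1)
        lhs = {'+': lhs + rhs, '-': lhs - rhs,
               '*': lhs * rhs, '/': lhs / rhs}[op]
    return lhs, pos

print(parse(score_ops('1+2*3'))[0])    # 7
print(parse(score_ops('(1+2)*3'))[0])  # 9 -- grouping handled by the score alone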
It's not just brilliant, it's earth-shattering.
llm generated comment
I am sure your comment is almost always wrong whenever you use it.