One topic that comes up is LLMs not doing well with UI and visuals. I've been trying a new approach I call CLI-first: instead of trying to get the LLM to generate a fully functioning UI app, you focus on building a local CLI tool first. The LLM can directly call the CLI tool, so you can iterate quickly on the design of whatever you are building. You can get it to walk through the flows and journeys using the CLI prototype. Your command structure will very roughly map to your resources or pages.

Once you are satisfied with the capability of the CLI tool (which may actually be enough on its own, or enough with just a local UI), you can get it to build the remote storage, then the APIs, and finally the frontend. All the while, you can still tell it to use the CLI to test through the flows and journeys against real tasks that you have, and iterate.

I did this recently for pulling some of my personal financial data and reporting on it, and now I'm doing it for a TTS automation I've wanted for a while.
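As a sketch of what that can look like (all names here are hypothetical, not from the original projects), a minimal argparse prototype whose subcommands roughly map to the future pages/resources:

```python
import argparse

def build_parser() -> argparse.ArgumentParser:
    # Hypothetical finance-reporting prototype: each subcommand maps
    # roughly to a resource/page of the eventual UI.
    parser = argparse.ArgumentParser(prog="finrep")
    sub = parser.add_subparsers(dest="command", required=True)

    imp = sub.add_parser("import", help="load transactions from a CSV export")
    imp.add_argument("path")

    rep = sub.add_parser("report", help="summarise spending by category")
    rep.add_argument("--month", default="latest")

    return parser

if __name__ == "__main__":
    args = build_parser().parse_args()
    print(args)
```

An agent can then exercise the flows directly (`finrep import tx.csv`, `finrep report --month 2024-01`), and the same subcommand structure later becomes API routes and pages.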
> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).
Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.
Programming languages are, after all, the interface a human uses to give instructions to a computer. If you're not writing or reading it, the language, by definition, doesn't matter.
In the 90s people hoped Unified Modeling Language diagrams would generate software automatically. That mostly didn’t happen. But large language models might actually be the realization of that old dream. Instead of formal diagrams, we describe the system in natural language and the model produces the code. It reminds me of the old debates around visual web tools vs hand-written HTML. There seems to be a recurring pattern: every step up the abstraction ladder creates tension between people who prefer the new layer and those who want to stay closer to the underlying mechanics.
Roughly: machine code → assembly → C → high-level languages → frameworks → visual tools → LLM-assisted coding. Most of those transitions were controversial at the time, but in retrospect they mostly expanded the toolbox rather than replacing the lower layers.
One workflow I’ve found useful with LLMs is to treat them more like a code generator after the design phase. I first define the constraints, objects, actors, and flows of the system, then use structured prompts to generate or refine pieces of the implementation.
The constraints enforced in the language still matter. A language which offers certain correctness guarantees may still be the most efficient way to build a particular piece of software even when it's a machine writing the code.
There may actually be more value in creating specialized languages now, not less. Most new languages historically go nowhere because convincing human programmers to spend the time it would take to learn them is difficult, but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.
> every AI coding bot will learn your new language
If there are millions of lines on github in your language.
Otherwise the "teaching the AI to write your language" part will occupy so much context and make it far less efficient than just using TypeScript.
Uh, not really. I'm already having Claude read and then one-shot proprietary ERP code written in a vintage, closed-source, OOP-oriented BASIC with sparse documentation... I just needed to feed it the millions of lines of code I have, and it works.
"i haven't been able to find much" != "there isn't much on the entire internet fed into them"
> but every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.
That's assuming that your new, very unknown language gets slurped up in the next training session which seems unlikely. Couldn't you use RAG or have an LLM read the docs for your language?
Agreed - unpopular languages and packages have pretty shaky outcomes with code generation, even ones that have been around since before 2023.
Neither RAG nor loading the docs into the context window would produce effective results. Even including the grammar files and a few examples in the training set wouldn't help. To get any usable results you still need many, many usage examples.
In addition, I think token efficiency will continue to be a problem. So you could imagine very terse programming languages that are roughly readable for a human, but optimized to be read by LLMs.
That's an interesting idea. But IMO the real token saver isn't in the language keywords; it's in the naming of things like variables, classes, etc.

There are already languages that are pretty sparse with keywords. E.g. in Go you can write `func hello() string` with no need to declare that it's public, static, etc. So combining a less verbose language with "code-golfing" the variable names might be enough.
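A rough way to see the effect of naming on token budgets. This uses a crude regex split as a stand-in for a real BPE tokenizer, so treat it strictly as an illustration; actual counts from a model's tokenizer will differ:

```python
import re

def rough_tokens(code: str) -> int:
    # Crude proxy for an LLM tokenizer: words split at underscores,
    # plus individual punctuation symbols. Real BPE counts will differ.
    return len(re.findall(r"[A-Za-z]+|\d+|[^\w\s]", code))

verbose = "def calculate_total_price(item_list):\n    return sum(item.price for item in item_list)"
terse = "def tot(xs):\n    return sum(x.price for x in xs)"

print(rough_tokens(verbose), rough_tokens(terse))
```

Note that the keywords (`def`, `return`, `for`) cost the same in both versions; the entire difference comes from the identifiers.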
I think I remember seeing research right here on HN that terse languages don't actually help all that much
> every AI coding bot will learn your new language as a matter of course after its next update includes the contents of your website.
How will it "learn" anything if the only available training data is on a single website?
LLMs struggle with following instructions when their training set is massive. The idea that they will be able to produce working software from just a language spec and a few examples is delusional. It's a fundamental misunderstanding of how these tools work. They don't understand anything. They generate patterns based on probabilities and fine tuning. Without massive amounts of data to skew the output towards a potentially correct result they're not much more useful than a lookup table.
Like everything generated by LLMs though, it is built on the shoulders of giants - what will happen to software if no one is creating new programming languages anymore? Does that matter?
I think the only hope is that AGI arises and picks up where humanity left off. Otherwise I think this is the long dark teatime of human engineering of all sorts.
> Impressive. As a practical matter, one wonders what the point would be in creating a new programming language if the programmer no longer has to write or read code.
I'm working on a language as well (hoping to debut by end of month), but the premise of the language is that it's designed like so:
1) It maximizes local reasoning and minimizes global complexity
2) It makes the vast majority of bugs / illegal states impossible to represent
3) It makes writing correct, concurrent code as maximally expressive as possible (where LLMs excel)
4) It maximizes optionality for performance increases (it's always just flipping option switches - mostly at the class and function-input level, occasionally at the instruction level)
The idea is that it should be as easy as possible for an LLM to write (and especially to convert other languages to), and as easy as possible for you to understand, while being almost as fast as absolutely perfect C code. And by virtue of the language's design, at the human-review phase you have minimal concern about hidden gotcha bugs.
How does a programming language prevent the vast majority of bugs? I feel like we would all be using that language!
I don’t agree with the idea that programming languages have no impact on an LLM's ability to write code. If anything, I imagine that, all else being equal, a language where the compiler enforces multiple levels of correctness would help the AI get to a goal faster.
A good example of this is Rust. Rust is memory safe by default compared to, say, C, at the expense of you having to be deliberate about ownership and memory. With LLMs this equation changes significantly, because that harder, more verbose code is being written by the LLM, so it won't slow you down nearly as much. Even better, the LLM can iterate with the compiler when something is not exactly as it should be.
On a different but related note, it's much the same as pairing Django or Rails with an LLM. The framework lets you trust that things like authentication and a passable code organization are being handled correctly.
That is why TypeScript is the main language used by most people vibe coding. The LLMs do like to work around its type engine sometimes, but strong typing and linting can help a ton.
Saves tokens. The main reason, though, is to manage performance: which techniques get used for specific use cases. In their case it seems to be about expressiveness in Bash.
> If you’re not writing or reading it, the language, by definition doesn’t matter.
By what definition? It still matters whether I write my app in Rust vs, say, Python, because the Rust version still has better performance characteristics.
In principle (and we hope in practice) the person is still responsible for the consequences of running the code and so it remains important they can read and understand what has been generated.
I've been wondering if a diffusion model could just generate software as binary that could be fed directly into memory.
Yeah, what could go wrong.
I've been working on a large codebase that was already significant before LLM-assisted programming, leveraging code I’d written over a decade ago. Since integrating Claude and Codex, the system has evolved and grown massively. Realistically, there’s a lot in there now that I simply couldn't have built in a standard human lifetime without them.
That said, the core value of the software wouldn't exist without a human at the helm. It requires someone to expend the energy to guide it, explore the problem space, and weave hundreds of micro-plans into a coherent, usable system. It's a symbiotic relationship, but the ownership is clear. It’s like building a house: I could build one with a butter knife given enough time, but I'd rather use power tools. The tools don't own the house.
At this point, LLMs aren't going to autonomously architect a 400+ table schema, network 100+ services together, and build the UI/UX/CLI to interface with it all. Maybe we'll get there one day, but right now, building software at this scale still requires us to drive. I believe the author owns the language.
Claude Code built a programming language using you
I'd say these times will be filled with a lot of tailored-to-you, "self"-made software, but the question is: are we increasing the amount of information in the world? I've heard that Claude and ChatGPT are getting good at mathematical proofs, which really does add something to our knowledge, but everything else is neutral to entropy, if not decreasing it. Strange time to live in, strange valuations and devaluations...
Not to discount your experience, but I don't understand what's interesting about this. You could always build a programming language yourself, given enough time. Programming languages' constructs are well represented in the training dataset. I want someone to build something genuinely novel that's not actually in the dataset, and then I'll be impressed by CC.
I think we're going to see a lot more of this. I've done a similar thing, hosting a toy language on Haskell, and it was remarkably easy to get something useful and usable in basically a weekend. If you keep the surface area small enough, you can now make a fully fledged, compiled language for basically every purpose you'd like, and coevolve the language, the code, and the compiler.
Yeah, it's a rewarding project. Getting a language that kinda works is surprisingly accessible. Though we must be mindful that this is still the "draw some circles" panel. Producing the rest of the famous owl is, as always, the hard bit.
We did this in 4th year comp-sci.
AI-written code with a human-written blog post, that's a big step up.
That said, it's a lot of words to say not a lot of things. Still a cool post, though!
> with a human-written blog post
I believe we're at a point where it's not possible to accurately decide whether text is completely written by human, by computer, or something in between.
We're definitely not at that point.
If this blog post is unedited LLM output, the blog owner needs to sell whatever model, setup and/or prompt he used for a million dollars, since it's clearly far beyond the state-of-the-art in terms of natural-sounding tone.
You can make an LLM sound very natural if you simply ask for it and provide enough text in the tone you'd like it to reproduce. Otherwise, an LLM with no additional context will obviously stick to the tone the company aligned it to produce.
Agree. I've been yearning for more insightful posts, and there just aren't a lot of them out there these days.
Using LLMs to invent new programming languages is a mystery to me. Who or what is going to use this? Presumably not the author.
Have the AI generate some feedback, then just move on to the next project, and repeat.
Next you can let Claude play your video games for you as well. Gads, we are a voyeuristic society, aren't we?
Why not let Claude do our dating? I'm surprised someone hasn't thought of this: AI dating, let the AI find and qualify a date for you, and match with the person who meets you, for you!
I suspect this is going to be an iteration of the Simpsons meme soon, but...
Black Mirror did it first https://en.wikipedia.org/wiki/Hang_the_DJ
Here's Claude playing Detroit: Become Human https://www.youtube.com/watch?v=Mcr7G1Cuzwk
I am kind of doing that now. I put Kimi K2.5 into a Ralph Loop to make a Screeps.com AI. So far it's been awful at it. If you want to track its progress, I have its dashboard at https://balsa.info
That was step #1.
Step #2 is: get real people to use it!
The AI age is calling for a language that is append-only, so we can write in a literate programming style and mix prompts with AI output, in a linear way.
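One minimal way to picture such an append-only, literate structure (purely a sketch; all names here are invented, not from any existing tool):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Cell:
    kind: str  # "prompt" or "output"
    text: str

class Ledger:
    """Append-only log interleaving human prompts and AI output."""

    def __init__(self) -> None:
        self._cells: list[Cell] = []

    def append(self, kind: str, text: str) -> None:
        # The only mutation allowed: no edits, no deletions, no reordering.
        self._cells.append(Cell(kind, text))

    def cells(self) -> tuple[Cell, ...]:
        # Read-only view of the full linear history.
        return tuple(self._cells)
```

Reading the ledger top to bottom replays the whole session in order, which is the literate-programming property being asked for.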
That’s git commits.
That's arguably not very ergonomic, which is probably the biggest requirement for a programming language.
or css
A REPL + immutability?
I rolled a fair die using ChatGPT.
Admittedly I only skimmed this, but I found it interesting that they came to the conclusion that Claude is really bad at (a thing they know how to do, and can therefore judge) and really good at (a thing they don't know how to do or judge).
I mean, they may be right but there is also a big opportunity for this being Gell-Mann amnesia : "The phenomenon of a person trusting newspapers for topics which that person is not knowledgeable about, despite recognizing the newspaper as being extremely inaccurate on certain topics which that person is knowledgeable about."
I had the exact same thoughts reading it.
Nope. You didn't write it. You plagiarized it. AI is bad
> I’ve also been able to radically reduce my dependency on third-party libraries in my JavaScript and Python projects. I often use LLMs to generate small utility functions that previously required pulling in dependencies from NPM or PyPI.
This is such an interesting statement to me in the context of leftpad.
I'm imagining the amount of energy required to power the datacenter so that we can produce isEven() utility methods.
Also, neither over-the-wire dependency issues nor code-injection issues (the two major criticisms) are solved by using an LLM to produce the code. Talk about shifting complexity. It would be better if every LSP had a general utility-library generator built in.
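For scale, the kind of micro-utility in question (a leftpad equivalent) is only a few lines. A sketch, not the actual NPM package:

```python
def left_pad(s: str, width: int, fill: str = " ") -> str:
    # Pad `s` on the left with `fill` until it is at least `width` chars long.
    if len(fill) != 1:
        raise ValueError("fill must be a single character")
    if len(s) >= width:
        return s
    return fill * (width - len(s)) + s

print(left_pad("7", 3, "0"))  # "007"
```

Which is roughly the point on both sides of the debate: trivial to generate locally, and equally trivial for a generated version to get subtly wrong without review.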
we need a caching layer
Now anyone can be a Larry Wall, and I'm not sure that's a good thing.
This is not exactly novel. In the 2000s, someone made a fully functioning Perl 6 runtime in a very short amount of time (a month, IIRC) using Haskell. The various Lisps/Schemes have always given you the ability to implement specialized languages even more quickly and ergonomically than Haskell (IMHO).
This latest fever for LLMs simply confirms that people would rather do _anything_ other than program in a (not necessarily purely) functional language that has meta-programming facilities. I personally blame functional fixedness (psychological concept). In my experience, when someone learns to program in a particular paradigm or language, they are rarely able or willing to migrate to a different one (I know many people who refused to code in anything that did not look and feel like Java, until forced to by their growling bellies). The AI/LLM companies are basically (and perhaps unintentionally) treating that mental inertia as a business opportunity (which, in one way or another, it was for many decades and still is -- and will probably continue to be well into a post-AGI future).
> While working on Cutlet, though, I allowed Claude to generate every single line of code. I didn’t even read any of the code. Instead, I built guardrails to make sure it worked correctly (more on that later).
The "more on that later" was unit tests (also generated by Claude Code) and sample inputs and outputs (which is basically just unit tests by a different name).
This is... horrifically bad. It's stupidly easy to make unit tests pass with broken code, and even more stupidly easy when the test is also broken.
These "guardrails" are made of silly putty.
Wait. You built a new language, that there's thus no training data for.
Who the hell is going to use it then? You certainly won't, because you're dependent on AI.
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html
It's a valid question, and one that everyone should be asking, unless of course it's for fun, which is what I believe this is.
It isn’t shallow.
Who’s going to use it?
With clear examples in their context they don't need training data.
I recently tried using Claude to generate a lexer and parser for a language I was designing. As part of its first attempt, this was the code it produced to parse a float literal:
Admittedly, I do have a very idiosyncratic definition of floating-point literal for my language (I have a variety of syntaxes for NaNs with payloads), but... that is not a usable definition of a float literal.

At the end of the day, I threw out all of the code the AI generated and wrote it myself, because the AI struggled to produce code that was functional to spec, much less code that would let me easily extend it to the other kinds of operators I knew I would need.