I used to be impressed with these corporate tech blogs and their internal proprietary systems, but not so much anymore, because code is a liability.
I would rather use off-the-shelf open source stuff with a long history of maintenance and improvement than reinvent cron/celery/airflow/whatever, because code is a liability. Somebody needs to maintain it, fix bugs, and add new features. Unless I get a +1 grade promotion and a salary/RSU bump, of course.
People need to realize that code is a liability; anything that is not the business-critical stuff that earns/makes $$$ for the company is a distraction and a resource sink.
Isn't this exactly WHY this blog post exists? They are open sourcing this software so that they don't have to maintain it all internally anymore.
They had a need that an existing "off-the-shelf open source" project didn't solve, so they created this and are now turning it into an "off-the-shelf open source" project so they can keep using it without having to maintain it entirely themselves.
How are these open source tools supposed to be created in the first place? This is the process; someone has to do it.
> People need to realize that code is a liability
This is an extreme point of view, one that is tightly connected to the MBA-driven min-maxing of everything under the sun.
I am glad that there are folks who aren't afraid to code new systems and champion new ideas. Even in the corporate sense, mediocre risk averse solutions will only take you so far. The most profitable companies tend to be quite daring in their tech.
Code is not a liability. Code is what makes a company move its gears.
Off-the-shelf open source stuff is often the product of big companies open sourcing internal tools, though. Airflow, which you name-check, is a great example of this. Temporal is another example in the space. Someone has to be dumb enough to build new stuff.
> with a long history of maintenance and improvement,
That is a huge load-bearing statement.
Do you plan on any contributions back to the community yourself?
Build vs. buy is always an important conversation, but claiming that the 'buy'-side path has exactly zero maintenance and reliability costs reeks of naivety.
> anything that is not the business critical stuff
That's an important qualifier. For skilled teams in performance-critical domains, the inflection point where any outside code becomes a low-quality/low-control liability is not that far.
100%. These systems are very rarely built as robustly as they are by external folks who earn a profit on building robustness, the best example of course being Stripe. But I see this with everything from visual snapshot testing tools to custom CI workflows. The good thing is you can always rely on competitive market dynamics to price the off-the-shelf solution down to a reasonable margin above maintenance costs.
I am confused by this comment:
> open source stuff with a long history of maintenance and improvement
Improvement and maintenance are contingent on usage, and having been used at Netflix, this project is in a better place, having already faced whatever bug you are worried about (and let's be real, 99% of applications won't ever get the luck to exercise code paths sophisticated enough to find bugs Netflix has not found already).
You might be unnecessarily projecting here. You don't have evidence that open sourcing this was done for any reason other than that it is simply good for the community to have.
This is a naive view; other people's code is even more of a liability. Look at CrowdStrike and open source infiltrations. Using open source software doesn't magically grant you security or stability.
> People need to realize that code is a liability
Code that you own and intimately understand is less of a liability than some 3rd-party dependency (paid or free). Stitching together a patchwork of dependencies is not likely to yield the optimal result. The more aligned your codebase is with the problem you're trying to solve, the better; and if functionality is core to your business, it's better to own than borrow or rent.
I very much disagree with this take, and the more I've experienced throughout my career, the more sure of it I am.
Companies spend an IMMENSE amount of time and effort adapting sometimes-subpar off-the-shelf solutions to fit their infra, and pay an ongoing tax of increasing tech debt trying to support them. Often something bespoke, smaller, and more tailored would unlock significantly more productivity if the investment is made consciously.
Any code that is written comes with both assets and liabilities. But to claim it is a distraction and resource sink is a very, very bad take. Every decision to build something in-house needs to be made thoughtfully and deliberately.
This sounds like the beginning of a sales pitch.
Aren’t most of those things developed in house at tech giants and later open sourced?
> code is a liability
3rd parties are also a liability. Pick your poison. Trust in unknown individuals, trust in megacorps, or trust your own people. Choosing wisely is why people get paid the big bucks.
> I would rather use off-the-shelf open source stuff with a long history of maintenance and improvement
Where do you think this "off-the-shelf open source stuff" comes from exactly?
Off the shelf comes with its own set of burdens; it's not always sunshine, rainbows, and lollies.
I was going to say this. I never mess with random libraries like this, always so much pain.
I wonder how many iterations we will need before engineers are happy with a workflow solution. Netflix had multiple solutions before Maestro, such as metaflow. Uber built multiple solutions too. Amazon had at least a dozen internal workflow engines. It's quite curious why engineers are so keen on building their own workflow engines.
Update: I just find it really interesting that many individuals at many companies like to build workflow engines. This is not a deriding comment towards anyone, or Netflix in particular. To me, such an observation is worth some friendly chitchat.
The issue is that "workflow orchestration" is a broad problem space. Companies need to address a lot of disparate issues and so any solution ends up being a giant product w/ a lot of associated functionality and heavily opinionated as it grows into a big monolith. This is why almost universally folks are never happy.
In reality there are five main concerns:
1. Resource scheduling -- "I have a job or collection of jobs to run... allocate them to the machines I have."
2. Dependency solving -- if my jobs have dependencies on each other, perform the topological sort so I can dispatch things to my resource scheduler (see the sketch after this comment).
3. API/DSL for creating jobs and workflows. I want to define a DAG... sometimes static, sometimes on the fly.
4. Cron-like functionality. I want to be able to run things on a schedule or ad hoc.
5. Domain awareness -- if doing ETL I want my DAGs to be data-aware... if doing ML/AI workflows then I want to be able to surface info about what I'm actually doing with them.
No one solution does all these things cleanly. So companies end up building or hacking around off the shelf stuff to deal with the downsides of existing solutions. Hence it's a perpetual cycle of everyone being unhappy.
I don't think that you can just spin up a startup to deliver this as a "solution". This needs to be solved with an open source ecosystem of good pluggable modular components.
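To make the dependency-solving concern concrete: the toposort core is tiny if you lean on a library. Here is a minimal sketch using Python's stdlib graphlib; the job names and the example graph are made up, and a real system would hand each get_ready() batch to its resource scheduler in parallel rather than printing:

    # Concern #2 (dependency solving) with only the Python stdlib.
    # Job names and the dependency graph below are hypothetical.
    from graphlib import TopologicalSorter

    # Each job maps to the set of jobs it depends on.
    deps = {
        "extract_a": set(),
        "extract_b": set(),
        "transform": {"extract_a", "extract_b"},
        "load":      {"transform"},
    }

    ts = TopologicalSorter(deps)
    ts.prepare()
    while ts.is_active():
        for job in ts.get_ready():   # all dependencies satisfied
            print("dispatch:", job)  # hand off to the resource scheduler
            ts.done(job)             # unblocks the job's successors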
Metaflow sits on top of Maestro, and neither replaces the other
> ...Users can use Metaflow library to create workflows in Maestro to execute DAGs consisting of arbitrary Python code. from https://netflixtechblog.com/orchestrating-data-ml-workflows-...
The orchestration section in this article (https://netflixtechblog.com/supporting-diverse-ml-systems-at...) goes into detail on how Metaflow interplays with Maestro (and Airflow, Argo Workflows & Step Functions)
We rolled our own workflow engine and it almost crashed one of our unrelated projects for having so many bugs and being so inflexible.
I’m starting to think workflow engines are somewhat of a design smell.
It’s enticing to think you can build this reusable thing once and use it for a ton of different workflows, but besides requiring more than one asynchronous step, these workflows have almost nothing in common.
Different data, different APIs, different feedback required from users or other systems to continue.
It’s likely because we haven’t yet found a workflow engine/orchestrator that’s capable of handling diverse tasks while still being easy to understand and operate.
It’s really easy to build a custom workflow engine and optimize it for specific use cases. I think we haven’t yet seen a convergence simply because this tool hasn’t yet been built.
Consider the recent rise of tools that quickly dominated their fields: Terraform (IaC), Kubernetes (distributed compute). Both systems are hella complex, but they solve hard problems. Generic workflow engines are complex to understand and difficult to operate and offer a middling experience so many folks don’t even bother.
Naming things, cache invalidation, and workflow engines? :)
https://github.com/meirwah/awesome-workflow-engines
It inherently asks for a custom implementation, because workflows are almost just how you'd have to code and run everything anyway. Conceptually: why wouldn't we want to reconnect to any work we are currently in the middle of, just like in a video game where, if we lose connection for a split second, we want to be able to keep going where we left off? So we must save the current step persistently and make sure that we can resume work and never lose it.

Workflow engines also do no magic: they still just run code, and if it fails in a place that we didn't manually checkpoint (i.e., by making it into a separate task/workflow/function/action/transaction that is persistable), then we still lose data. So at that point, why not just try doing it this way everywhere, whether it's running in a "workflow engine" or not? Before "workflow engines" we already had DB transactions, but those were mostly for our own benefit, so we don't mess up the DB with partial inserts.

Although, so far, what I've seen in open source workflow engines is that they don't let you work with user input easily; it's sad how they all start a new thread and then just block it while waiting for the user to send something. This is obviously not how you'd code a CRUD operation. In my opinion this is a huge drawback of current workflow engines. If this were solved, we should literally do everything as a workflow, I think. Every form submission could offer to let the user continue where they left off, because we saved all their data so "they can reconnect to their game" (to revive the video game metaphor I started with).
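A minimal sketch of the checkpoint-and-resume idea from this comment: persist which steps completed so a rerun continues where it left off. The state file and step names are made up, and a real engine would persist the checkpoint transactionally along with the step's own writes:

    # Persist completed steps so a rerun resumes instead of restarting.
    # STATE_FILE and the example steps are hypothetical.
    import json, os

    STATE_FILE = "workflow_state.json"

    def load_done():
        if os.path.exists(STATE_FILE):
            with open(STATE_FILE) as f:
                return set(json.load(f))
        return set()

    def mark_done(done, name):
        done.add(name)
        with open(STATE_FILE, "w") as f:
            json.dump(sorted(done), f)  # checkpoint *after* the step commits

    def run(steps):
        done = load_done()
        for name, fn in steps:
            if name in done:
                continue            # already checkpointed; skip on resume
            fn()                    # a crash here means this step reruns
            mark_done(done, name)

    run([("charge_card", lambda: print("charging")),
         ("send_email",  lambda: print("emailing"))])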
I wrote my own because I wanted to learn about DAGs and toposort, and had some ideas about what nodes and edges in the workflow meant (i.e., does data flow over the edges, or do the edges just represent the sequence in which things run? Is a node a bundle of code? Does it run continuously, or run then exit?). I almost ended up with reflow, which is a functional-programming approach based on Python, similar to nextflow, but I found the whole functional approach extremely challenging to reason about and debug.
Often what happens is the workflow engine is tailored to a specific problem, and then other teams discover the engine and want to use it for their projects but need some additional feature, sometimes one which completely up-ends the mental model of the engine itself.
We all have different use cases. We also have a workflow engine at work, but that's because we wanted immediate execution. Submit-to-execute time can be 100 ms on our system, which also makes it work well for short jobs. Usually, the task coordinator overhead on these things is greater than that.
These things tend to be fairly complex and require lots of integration with various services to get working. I think it's a little more organic to start building something simple and progressively add more, rather than implementing one from scratch (unless there are people around with experience).
It's because Netflix pretends to be a tech company to get the high market cap.
So they hire tons of engineers who have nothing to do but rearchitect the mess their microservices have created.
Then there are others who create observability and test harnesses for all of that.
When Pornhub and other porn sites can deliver orders of magnitude more data across the world with much simpler systems, you know it's all bullshit.
> why engineers are so keen on building their own workflow engines
Because all the existing ones suck.
(We built our own tiny one too. We need tight integration with systemd jobs and cgroups, and existing solutions don't do that.)
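A rough illustration of what that kind of tight systemd/cgroups integration can look like at its smallest: launch each job in a transient unit with resource limits via systemd-run. The unit name, limits, and command below are all made up, and this assumes a systemd host with suitable permissions:

    # Launch a job in its own transient systemd unit with cgroup limits.
    # Unit name, limits, and command are hypothetical.
    import subprocess

    def run_job(name, cmd, mem="512M", cpu="50%"):
        return subprocess.run([
            "systemd-run",
            "--unit", f"job-{name}",
            "--collect",                     # clean up the unit when it exits
            "--wait",                        # block until the command finishes
            "--property", f"MemoryMax={mem}",
            "--property", f"CPUQuota={cpu}",
            *cmd,
        ], check=False)

    run_job("example", ["/bin/sleep", "1"])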
Founder of https://windmill.dev here, which shares many similarities with Maestro.
> Maestro is a general-purpose, horizontally scalable workflow orchestrator designed to manage large-scale workflows such as data pipelines and machine learning model training pipelines. It oversees the entire lifecycle of a workflow, from start to finish, including retries, queuing, task distribution to compute engines, etc.. Users can package their business logic in various formats such as Docker images, notebooks, bash script, SQL, Python, and more. Unlike traditional workflow orchestrators that only support Directed Acyclic Graphs (DAGs), Maestro supports both acyclic and cyclic workflows and also includes multiple reusable patterns, including foreach loops, subworkflow, and conditional branch, etc.
You could replace Maestro with Windmill here and it would be precisely correct. Their rollup is what we call the openflow state.
Main differences I see:
- Windmill is written in Rust instead of Java.
- Maestro relies on CockroachDB for state; we use PostgreSQL for everything (state but also the queue). I can see why they would use CockroachDB; we had to roll out our own sharding algorithms to make Windmill scale horizontally on our very large customer instances.
- Maestro is Apache 2.0 vs Windmill's AGPL, which is less friendly.
- It's backed by Netflix, so effectively infinite money; we are profitable, but a much smaller company.
- Maestro doesn't have extensive docs about self-hosting on k8s or docker-compose, and either there is no UI to build stuff, or the UI is not yet well surfaced in their documentation.
But overall, pretty cool stuff to open source; will keep an eye on it and benchmark it ASAP.
Anyone considering windmill needs to look at this first:
https://www.windmill.dev/docs/advanced/local_development.
Why do I need to "sync" with windmill? Why is there an IDE built into windmill? Why is this so convoluted? It's like it's starting with the goal of lock-in before even developing a good product or finding market fit.
Thanks for the great comparison! While Maestro is Apache licensed, if it depends on CockroachDB, Cockroach itself isn't even open source, so that isn't great. I would rather have an AGPL codebase than a non-open-source dependency. Of course, over time someone could add alternative DB support.
Been using Windmill for a few months and so far it's rock solid. Keep it up!
I'm a bit confused about what is going on here: this project appears to use Netflix/conductor [0]. But if you go to that repo, you see it has been archived, with a message saying it is replaced by Netflix's internal non-OSS version and by unmentioned community forks – by which I assume they mean Orkes Conductor [1]. But this isn't using Orkes Conductor; it looks like it is using the discontinued Netflix version `com.netflix.conductor:conductor-core:2.31.5` [2] – and an outdated version of it too.
[0] https://github.com/Netflix/conductor
[1] https://github.com/conductor-oss/conductor
[2] https://github.com/Netflix/maestro/blob/e8bee3f1625d3f31d84d...
Yes, Netflix abandoned Conductor a long time ago [0]. The other repo has been built and managed by Orkes since then.
[0] https://techcrunch.com/2023/12/13/orkes-forks-conductor-as-n...
Anyone here use ActiveBatch? To me it is the best software of its kind; I wish it had an equivalent for non-enterprise users. I have tried and tried to use other "competitors", but ActiveBatch's simplicity of just attaching a simple MS SQL DB and installing the Windows GUI and execution agent is just click, click, click, and now you have a robust GUI-based automation environment where you don't have to use code... or, if you want, go ahead and use code in any language, but you don't have to.
Airflow may be robust but it is hidden behind a complexity fence that prevents most from seeing whatever its true capability may be. The same goes for other "open source" competitors.
Why can't someone just develop a robust, DB-backed, GUI-first system?
I have tried online services as well; they pale in comparison. I guess the cost of maintaining extensions is what kills simpler paid offerings?
It's a complete shame that ActiveBatch is walled off behind a stupid enterprise sales model. This has prevented this wonderful piece of software from being picked up by the wider community. It's like a hidden secret. :/
Advice: don’t rely on any tool open-sourced by Netflix. They have a long history of dropping support for things after they’ve announced them. Someone got a checkmark on their promotion packet by getting this blog post and code sharing out the door, but don’t build your business on a solution like this.
Why would one consider this over something more established such as Temporal? Also, I see Maestro is written in Java vs Temporal's Go.
Temporal's Go is... something. They used to use Java (I think), then they switched to Go, and the Go is very Java-like.
Or maybe I just don't know Fx.
https://github.com/temporalio/temporal/blob/main/service/mat...
The issue we hit with Temporal - again and again - is that it's very under-documented, and it's something you install at the core of your business, yet it's really hard to understand what is going on through all the layers and the very obtuse documentation.
Maestro has... no documentation? OK Temporal wins by default.
Netflix also uses temporal: https://temporal.io/in-use/netflix
That's also my question.
Isn’t Maestro an alternative to Airflow, not Temporal? Temporal isn’t a workflow orchestrator. There’s some overlap in the internals, but they’re different designs for different use cases.
Didn't they rewrite some of Temporal's core in Rust?
This is a really great-looking project. I know I've considered building (a probably worse) version of exactly this on almost every mixed ML + Data Engineering project I've ever worked on.
I'm looking forward to testing it out.
I'm building something in the space (orchestra) so here's my take:
Folks making stuff open source and building in the open is obviously brilliant, but when it comes to "orchestrators" (as this is, and identifies as), there is already so much that has come before (Airflow and so on) that it's quite hard to see how this actually adds anything to the space, other than another option nobody is ever going to use in a commercial setting.
Shameless plug: https://getorchestra.io
Is this meaningfully different from Conductor (which they archived a while back)? Browsing through the code I see quite a few similarities. Plus the use of JSON as the workflow definition language.
Conductor was moved here: https://github.com/conductor-oss/conductor
Maestro uses Conductor as its core.
https://github.com/Netflix/maestro/blob/main/maestro-engine/...
https://netflixtechblog.com/orchestrating-data-ml-workflows-...
So will this serve as a stand-in replacement for something like Airflow?
I'm also missing comparisons to other existing tools like Airflow, Dagster, MLflow...
Yeah, also curious if this is meant as a replacement for Airflow.
Interesting. My team recently built a thing for managing long running, multi-machine, restartable, cascading batch jobs in an unrelated vehicle. Had no idea it was a category.
The name Maestro has already been used for a workflow orchestrator which I worked on back in 2016. That maestro is SQL-centric and infers dependencies automatically by simply examining the SQL. It's written in Go and is BigQuery-specific (but could be easily adjusted to use any SQL-based system).
https://github.com/voxmedia/maestro/
With all due respect, there are so many projects. They don’t care about clashing with a repo that has 12 stars and 14 commits.
Seems like they re-engineered Temporal: https://temporal.io/
They did use Temporal at Netflix; they gave a couple of presentations two years ago. I think this is very much not-Temporal because it relies on a DSL instead of workflow-as-code.
I don't know if it's a scale thing; I'm not a workflow expert, but this seems more in line with the map-reduce of yore, in that you get some big fat steps and you coordinate them, although you could have coarse-grained activities in Temporal workflows.
I'd be curious to see what the tradeoffs are between the two, and whether they still have uses for Temporal. Maybe Maestro is better for less technical people? Latency? Scale?
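To make the DSL vs. workflow-as-code distinction concrete: in Temporal's Python SDK, the control flow is ordinary code that the SDK replays deterministically, roughly like the untested sketch below (the activity, workflow class, and names are made up, and it won't run without a Temporal server and a registered worker). A DSL-based engine instead describes the same steps declaratively, e.g. in JSON or YAML.

    # Untested sketch of Temporal-style "workflow as code" (Python SDK).
    # The activity, workflow class, and names are hypothetical.
    from datetime import timedelta
    from temporalio import activity, workflow

    @activity.defn
    async def transcode(title: str) -> str:
        return f"transcoded {title}"

    @workflow.defn
    class TranscodeWorkflow:
        @workflow.run
        async def run(self, title: str) -> str:
            # Ordinary code drives the steps; retries/timeouts are declared here.
            return await workflow.execute_activity(
                transcode, title, start_to_close_timeout=timedelta(minutes=5)
            )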
I'm sure this is very nice, but the article reads as if written by AI. The first thing I'd want to see is an example workflow (both code and configuration) in a realistic use case. Instead, there's a lot of "powerful and flexible" language, the example workflow doesn't come until halfway down, and then it's just foobar.
Very nice; Netflix has a reputation for making great OSS products. I wonder where this stands relative to Conductor.
Maestro is a domain specific implementation for ML and data pipelines that uses Conductor as its core
https://netflixtechblog.com/orchestrating-data-ml-workflows-...
https://github.com/Netflix/maestro/blob/main/maestro-engine/...
Don't see many Java projects being posted on HN.
We only upvote Go or Rust projects here ;)
Slightly off topic, but there is a dire need for a scientific "workflow manager" built to FAANG engineering standards and attuned to the needs of academia (i.e., primarily designed to facilitate execution of DAGs on clusters). The Airflows of the world have complex, unnecessary features and require extensive kitbashing to plug into Slurm, and the academic side of things is a huge mess. Snakemake comes the closest but suffers from massive feature creep, a bizarre specification DSL (a superset of Python), and blurred resource-requirement abstraction boundaries.
Academia would do better to learn k8s and one of the k8s-native workflow orchestrators. That is as close to FAANG-grade and open source as they can get, and arguably a bit better than this repo.
What about Nextflow?
So they abandoned https://github.com/Netflix/conductor to create Maestro
Looks a bit like Argo Workflows combined with Argo Events. Makes sense to have so many projects and products converge around the same endstate.
Anyone have a recommendation for a workflow orchestrator for single server deployments? Looking at running a project at home and for certain pieces think it would be easiest to orchestrate with a tool like Maestro or Airflow but they’re basically set up to run in clusters with admins to manage them.
Windmill is pretty lightweight and easy to deploy. https://www.windmill.dev/ you can configure it to have a single worker on the same server as the ui and database.
I'd recommend Kestra[1] since it can be run on a single node
[1] https://kestra.io/
For Python tasks you can check Prefect, among others..
It says one of the big differentiators from 'traditional workflow orchestrators' is that it supports cyclic graphs. But BPMN (and the orchestrators using it) also supports loops.
Interesting how complete this is. It's almost as comprehensive as prefect.io
This is a critical piece of software infrastructure that I have been promoting for years, yet almost everyone thinks they don't need it.
Dagster is a better alternative because of its asset-first philosophy. Task-based workflows are still available if you really need them.
What's the difference between this and enqueuing work into a queue, then waiting for a job to pick it up at a scheduled time? I'm not saying build a Kafka cluster to serve this, but most cloud providers have queuing tools.
Putting work in a queue is only the start. Most organizations start there and gradually write ad hoc logic as they discover problems like dependencies, retries, and scheduling.
Dependencies: what can be done in parallel and what must be done in sequence? For example, three tasks get pushed into the queue, and only after all three finish can a fourth task run.
Retries: the concept is simple; the details are killer. For example, if a task fails, how long should the delay between retries be? Too short and you create a retry storm. Forget to add some jitter and you get thundering herds all retrying at the same time.
Scheduling: Because cron is good enough, until it isn't.
A good workflow solution provides battle tested versions of all of the above. Better yet, a great workflow solution makes it easier to keep business logic separate from plumbing so that it's easier to reason about and test.
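A minimal sketch of the retry point above: capped exponential backoff with "full jitter", the usual fix for retry storms and lockstep wake-ups. The attempt counts and delays are made-up defaults, not battle-tested values:

    # Retry with capped exponential backoff plus full jitter.
    # The defaults below are illustrative, not battle-tested values.
    import random, time

    def retry(fn, attempts=5, base=0.5, cap=30.0):
        for attempt in range(attempts):
            try:
                return fn()
            except Exception:
                if attempt == attempts - 1:
                    raise               # out of attempts; surface the error
                # Sleep a random amount up to the capped backoff, so a
                # fleet of clients doesn't wake up in lockstep.
                time.sleep(random.uniform(0, min(cap, base * 2 ** attempt)))

    retry(lambda: print("ok"))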
Workflows typically involve chains of jobs with state transitions, waits, triggers, error handling, etc. -- a lot more than just, e.g., celery jobs.
A workflow manager implements an orchestration-based saga pattern: https://microservices.io/patterns/data/saga.html
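A minimal sketch of the orchestration-style saga idea: run the steps in order and, on failure, run the compensations of the completed steps in reverse. The step names and actions are made up:

    # Orchestration-style saga: undo committed steps on failure.
    # The steps and their compensations are hypothetical.
    def run_saga(steps):
        done = []
        try:
            for name, action, compensate in steps:
                action()
                done.append((name, compensate))
        except Exception:
            for name, compensate in reversed(done):
                compensate()        # best-effort undo of committed work
            raise

    run_saga([
        ("reserve_inventory", lambda: print("reserved"),
                              lambda: print("released")),
        ("charge_card",       lambda: print("charged"),
                              lambda: print("refunded")),
    ])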
What is a workflow in this context?
great job on open sourcing!