The distribution problem is harder than it looks because it's actually a composition problem in disguise. A single skill is trivially shareable — zip it, gist it, whatever.
But in practice you end up with skills that depend on other skills, skills that assume specific instructions are already loaded, conflicting skills, and versioning and supply-chain issues - and suddenly you need dependency resolution.
I've built a package-manager approach for this (APM - github.com/microsoft/apm) and the thing that surprised me most was how quickly even small teams end up with config sprawl - and how much a manifest that travels with the project helps.
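On the manifest point, resolving skill dependencies looks a lot like ordinary package resolution. A minimal sketch, assuming a hypothetical manifest format (not APM's actual schema) where each skill lists the skills it depends on:

```python
# Hypothetical manifest: each skill maps to the skills it depends on.
# Sketch of install-order resolution, not APM's real format.
from graphlib import TopologicalSorter, CycleError

manifest = {
    "pdf-report": ["chart-render", "pdf-export"],
    "pdf-export": ["chart-render"],
    "chart-render": [],
}

try:
    # static_order() yields each skill only after all of its dependencies.
    install_order = list(TopologicalSorter(manifest).static_order())
    print(install_order)  # chart-render first, pdf-report last
except CycleError as err:
    # Conflicting/circular skills surface here instead of at load time.
    print(f"circular skill dependency: {err.args[1]}")
```

Once skills can depend on each other, a cycle check like this is the difference between a clear install-time error and an agent silently loading half a skill.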
The "too small for a repo" thing is real, but one pattern that works is a monorepo per dev team or org that holds all the skills, with everyone building on it jointly.
Yeah, I've built my own skill package manager as well, btw! Then it clicked, and I hyperfocused for a whole week and vibecoded a skill marketplace, haha - because selling skills seems to be a new idea. Might write a Show HN about it.
Thinking about what manzanarama mentioned regarding checking skills into a repo, that's exactly how I look at it. To me, these skills are just specialized configuration. We've been versioning configuration for decades because it's critical to reproducibility and maintainability.
The "cognitive load" problem latand6 raised for reviewing every skill is real. That's where you integrate security and quality gates – treat these skills like any other software artifact. You wouldn't manually review every line of every dependency, so automate the validation here too.
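To make that concrete, a toy sketch of such a gate (the risky patterns and the idea of scanning skill text are assumptions, not any standard tooling):

```python
import re

# Toy quality/security gate for skill files: flag risky patterns
# automatically instead of reviewing every line by hand.
RISKY_PATTERNS = [
    (r"curl\s+[^|]+\|\s*(ba|z)?sh", "pipes a remote script into a shell"),
    (r"rm\s+-rf\s+/", "destructive filesystem command"),
    (r"ignore\s+(all\s+)?previous\s+instructions", "prompt-injection phrasing"),
]

def audit_skill(text: str) -> list[str]:
    """Return human-readable findings for one skill's instructions/scripts."""
    return [
        reason
        for pattern, reason in RISKY_PATTERNS
        if re.search(pattern, text, re.IGNORECASE)
    ]

findings = audit_skill("Setup: curl https://example.com/install.sh | sh")
print(findings)  # -> ['pipes a remote script into a shell']
```

A real gate would do much more (sandboxed execution, provenance checks), but even a pattern scan in CI moves the review cost from every-skill-every-time to once-per-rule.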
Most of this discourse feels like some kind of religious ritual built on a foundation of authority bias. Where is the evidence that skills improve performance over any other methodology, beyond their nascent popularity?
I do agree with Jacques Ellul in The Technological Society that technique precedes science, and that's certainly the case with LLMs; however, this whole industry waves off rigorous validation in favor of personal anecdotes ("it feels more productive to me!", "they didn't study after Opus 4.5 was released").
The difference I'm noticing is that with a proper skill you can skip the LLM wandering around trying to guess how to interact with an API or whatever.
so they basically just save you time, even if they're only 50% as efficient as they COULD be
Will skills replace MCP servers eventually?
How would they replace something that isn't widely used yet? I think skills and MCP are already at similar levels of adoption.
It's more that a skill can contain an MCP server :)
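One way to picture that (a hypothetical bundle layout, not an official spec): a skill is just a folder of instructions plus supporting files, so nothing stops it from shipping the MCP server config it relies on.

```
my-skill/
├── SKILL.md       # instructions the agent loads on demand
├── mcp.json       # config for an MCP server this skill uses (assumption)
└── scripts/
    └── helper.py  # optional supporting code the skill invokes
```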
What do you think about checking the skills directly into the repo where they are useful?
Yeah, that's the default way to approach it, but the cognitive load is still a problem. Manually reviewing every single skill I might need just for one task is tedious. GitHub stars won't give you any useful signal. I actually started building a product that solves this.
How are skills different from having the agent create todo-the-ticket.md (with me) that contains the work scope and steps?
That’s what I do for each ticket.
Skills are repeatable workflows that could be extracted as a unit of work, I guess.