There’s a pattern emerging in how people work with AI coding assistants, and it took me a while to notice it. The conversation usually goes like this: someone discovers that their LLM can do something useful, they craft a prompt that works, and then they paste that prompt into every new session. Forever. Some people maintain text files full of these prompts. Others memorize them. A few particularly organized folks dump them into system instructions and hope for the best.
This works, sort of. But it’s fragile in ways that become obvious once you’ve done it enough times. Prompts drift as you tweak them. Context gets lost between sessions. The knowledge that made your AI assistant actually useful for your specific workflow lives scattered across chat histories, notes, and your own memory. Addy Osmani described this well when he wrote about turning “fragile repeated prompting into something durable and reusable.” That’s the problem skills solve.
The filesystem as interface
A skill is just a folder. Inside it, there’s typically a SKILL.md file containing instructions the agent should follow when the skill is relevant, plus whatever supporting materials make sense — scripts, templates, reference docs, example files. The agent discovers these skills by scanning designated directories, reads the metadata to understand what each skill does, and loads the full instructions only when your request matches. Think of it as progressive disclosure for AI context: lightweight descriptions up front, detailed knowledge on demand.
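Concretely, a skill folder might look like the sketch below. The names and contents are hypothetical; the frontmatter convention of a short `name` and `description` block is what the agent reads during that first lightweight scan:

```
skills/
└── release-notes/
    ├── SKILL.md          # metadata + full instructions
    ├── template.md       # supporting template
    └── changelog.py      # helper script
```

```markdown
---
name: release-notes
description: Draft release notes from merged pull requests
---

# Release notes

1. Gather the merged PRs since the last release tag.
2. Group changes by area using template.md.
3. Flag breaking changes explicitly at the top.
```

Only the frontmatter needs to fit in the agent's always-on context; the numbered instructions load when a request actually matches.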
What makes this different from a massive system prompt or a carefully curated set of custom instructions? Modularity, mostly. You can have dozens of skills installed without bloating every conversation with irrelevant context. You can share skills between agents, version them in git, publish them to registries like ClawdHub. When someone figures out a better way to do something, they update the skill and everyone benefits. This is the Unix philosophy applied to AI capabilities: small, composable, shareable units that do one thing well.
The Spring AI team has been experimenting with this pattern on the Java side, and their framing is useful. They describe skills as “modular folders of instructions, scripts, and resources that AI agents can discover and load on demand.” The key phrase is “on demand” — the agent isn’t loaded down with everything it might ever need; it reaches for specific knowledge when the situation calls for it.
Skills versus tools
If you’ve followed Anthropic’s Model Context Protocol, you might wonder how skills differ from MCP tools. They’re complementary, actually. MCP provides a standardized way for agents to connect to external services — databases, APIs, file systems, browsers. It’s the plumbing. Skills are more like the instruction manual that tells the agent how to approach certain types of work, what to watch out for, which tools to prefer, what the expected output should look like.
A skill might tell an agent: when someone asks you to analyze a PDF, use this specific approach, check these common failure modes, format results this way. An MCP server might provide the actual capability to read that PDF. The skill encodes workflow knowledge and domain expertise; the tool provides the raw capability. You need both, and they work best together.
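To make that division of labor concrete, here is a hypothetical SKILL.md body for the PDF case. Note what it does and doesn't contain: workflow steps, failure modes, and output expectations, but no extraction code — that capability is assumed to come from whatever PDF tool the agent's MCP servers expose:

```markdown
---
name: pdf-analysis
description: Analyze PDF reports and summarize findings
---

When asked to analyze a PDF:

1. Extract the text with the available PDF tool; fall back to OCR
   if the pages are scanned images.
2. Check common failure modes: truncated pages, embedded tables
   flattened into prose, headers repeated mid-paragraph.
3. Report findings as a bullet summary, followed by page-level
   citations for every claim.
```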
The interesting thing about skills is how they change what “configuring” an AI assistant means. Instead of writing increasingly elaborate system prompts or trying to anticipate every possible request, you build up a library of specialized knowledge that the agent draws from as needed. It’s less like programming and more like training — not in the machine learning sense, but in the way you’d train a capable new team member by pointing them at documentation and worked examples.
This matters because the bottleneck in AI-assisted work isn’t usually the model’s raw capability. It’s context. The model doesn’t know your codebase conventions, your deployment process, your team’s preferences, the weird edge cases in your domain. Skills are a structured way to give it that knowledge without repeating yourself endlessly or hoping it remembers from three conversations ago. Whether this pattern survives or gets absorbed into something else, the underlying insight feels right: packaging knowledge beats repeating instructions.