Author: @BlazingKevin_, a researcher at the asset management firm Blockbooster
In 2025, the AI Agent field reached a critical juncture, transitioning from "technical concept" to "engineering implementation." In this process, Anthropic's exploration of capability encapsulation unexpectedly catalyzed an industry-wide paradigm shift.

On October 16, 2025, Anthropic officially launched Agent Skill. Initially, the feature's official positioning was extremely restrained: it was presented merely as an auxiliary module for improving Claude's performance on specific vertical tasks (such as complex code logic or specialized data analysis).
However, market and developer feedback far exceeded expectations. Developers quickly discovered that this "modular capability" design offered exceptional decoupling and flexibility in real engineering work: it reduced the redundancy of prompt tuning and significantly improved the stability of Agents on specific tasks. This experience triggered a chain reaction in the developer community. Within a short period, leading productivity tools and integrated development environments (IDEs), including VS Code, Codex, and Cursor, followed suit, successively adding underlying support for the Agent Skill architecture.
Faced with the ecosystem's spontaneous expansion, Anthropic recognized the mechanism's universal underlying value. On December 18, 2025, it made a landmark decision: officially releasing Agent Skill as an open standard.
Following this, on January 29, 2026, the official detailed user manual for Skill was released, dismantling at the protocol level the technical barriers to cross-platform, cross-product reuse. This series of moves signals that Agent Skill has completely shed its label as a "Claude-exclusive accessory" and officially evolved into a universal underlying design pattern for the entire AI Agent field.
At this point, a question arises: what core engineering pain points does Agent Skill, now embraced by major companies and core developers, actually solve? And what are the essential differences, and the collaborative relationship, between it and the currently popular MCP (Model Context Protocol)?
To clarify these issues thoroughly, and ultimately apply them to the practical construction of investment research workflows in the crypto industry, this article works through the following topics step by step.
What exactly is an Agent Skill? In the simplest terms, it is essentially a "personalized instruction manual" that a large model can consult at any time.
When using AI day to day, we often run into the same pain point: every time we start a new conversation, we have to rewrite the same long request. Agent Skill was created to solve exactly this problem.
For a practical example: Suppose you want to create an "intelligent customer service" agent. You can clearly write down the rules in your Skill: "When encountering a user complaint, the first step must be to calm them down, and you must never make any promises of compensation." Another example: If you frequently need to create "meeting summaries," you can directly define a template in your Skill: "Each time you output a meeting summary, you must strictly follow the format of the three sections: 'Attendees,' 'Core Issues,' and 'Final Decisions.'"
With this "instruction manual," you won't need to repeat that long string of instructions in every conversation. When the large model receives a task, it will automatically consult the corresponding Skill and immediately know which standard to use to perform the task.
Of course, "documentation" is just a simplified analogy for easier understanding. In reality, Agent Skill can do far more than simply provide formatting guidelines; we will break down its killer advanced features in detail in later chapters. But in the initial stages, you can think of it as an efficient task instruction manual.
Next, we'll use the familiar scenario of a "meeting summary" to see how to create an Agent Skill. The entire process doesn't require complex programming knowledge.
Based on the current conventions of mainstream tools (such as Claude Code), we need to find (or create) a folder called .claude/skills in our home directory. This is the "headquarters" where all skills are stored.
First, create a new folder in this directory, named exactly the same as your Agent Skill. Second, create a text file named SKILL.md inside the folder you just created.
Every Agent Skill must have a SKILL.md file. Its purpose is to tell the AI: who I am, what I can do, and how to work according to my instructions. Open the file and you'll find it clearly divided into two parts:
At the very top of the file, enclosed between two `---` markers, is the metadata area, which holds just two core attributes: `name` and `description`.
- `name`: the skill's name; it must exactly match the name of the enclosing folder.
- `description`: an extremely important field. It explains the skill's specific purpose to the large model. The AI continuously scans all skill descriptions in the background to decide which skill should handle the user's request, so writing an accurate, comprehensive description is a prerequisite for your skill being reliably activated.

Everything below the closing `---` is the set of specific rules written for the AI, officially called the "instructions." This is where you get creative: describe in detail the logic the model must follow. In the meeting-summary example, you could specify here in plain language: "The list of attendees, the topics discussed, and the final decisions must all be extracted."
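Putting the two parts together, a minimal SKILL.md for the meeting-summary example might look like the sketch below. The skill name and wording are illustrative, not an official template:

```markdown
---
name: meeting-summary
description: Produces structured meeting summaries. Use when the user asks to summarize a meeting, meeting notes, or a transcript.
---

When summarizing a meeting, always output exactly three sections, in this order:

1. Attendees: every participant mentioned in the notes.
2. Core Issues: the topics that were discussed.
3. Final Decisions: each decision reached, with its owner if stated.

Never invent attendees or decisions that do not appear in the source notes.
```

Note that the `name` value matches the folder name (`.claude/skills/meeting-summary/`), and the `description` spells out when the skill should fire, which is what the model routes on.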
Once you've completed these steps, a simple yet highly practical Agent Skill will be created.
However, a truly useful skill often begins with meticulous upfront design. Clearly defining your goals, scope, and success criteria before typing your first line will make your build process much more efficient.
The first step in building a skill is not to ask "what tricks can I get the AI to do?" but rather: "what repetitive problems do I need to solve in my daily work?" It's best to start by defining two or three specific scenarios the skill should cover.
Second, define the criteria for success. How will you know whether the skill you've written is any good? Before you start, set several measurable standards. A quantitative standard might be "has processing speed improved?"; a qualitative one might be "are the meeting decisions it extracts accurate and complete every time?"
Having learned about the basics of Agent Skill, we can't help but ask: how exactly does this "documentation" work in actual operation?
If you've recently used a product like Manus AI, you've likely experienced this scenario: when you pose a specific question, the AI doesn't immediately launch into a long-winded answer or start hallucinating. Instead, it astutely recognizes that "this matter falls under the jurisdiction of a specific Agent Skill," and a prompt appears on screen asking whether you allow that Skill to be invoked.
Once you click "Agree," the AI behaves like a completely different person, perfectly outputting results according to the preset rules.
Behind this seemingly simple "request-approve-execute" interaction lies a carefully engineered underlying workflow. To explain the mechanism fully, we first need to identify the three core roles involved throughout the interaction: the user, the client tool (such as Claude Code), and the large model.
When we input a request into the system (e.g., "Please summarize this morning's project meeting"), the following four steps of precise collaboration occur between these three roles:
Step 1: Lightweight Scan (Transferring Metadata)
After a user enters a request, the client tool (Claude Code) doesn't immediately send all the documentation to the large model. Instead, it packages the user's request along with the "names" and "descriptions" of all Agent Skills in the current system (the Metadata layer mentioned in the previous chapter) and sends it to the large model. You can imagine that even if you have installed a dozen or even dozens of Skills, the large model only receives a "lightweight directory." This design greatly saves the model's attention and avoids mutual interference of information.
Step 2: Precise Intent Matching. After receiving the user's request and the "Skill directory," the large model performs rapid semantic analysis. It finds that the user's request is to "summarize the meeting," and that the directory contains a Skill called "Meeting Summary Assistant" whose description matches the task perfectly. At this point, the large model tells the client tool: "I found that this task can be solved with 'Meeting Summary Assistant.'"
Step 3: Loading the Complete Instructions on Demand. After receiving this feedback from the large model, the client tool (Claude Code) actually enters the "Meeting Summary Assistant" folder and reads the complete SKILL.md text. Note the crucial design here: only at this point is the full instruction content read, and the system reads only this one selected Skill. The other, unselected Skills remain quietly in the directory, consuming no resources.
Step 4: Strict Execution and Output. Finally, the client tool sends the "user's original request" together with the "complete SKILL.md content of the Meeting Summary Assistant" to the large model. This time, the large model is no longer making choices but entering execution mode: it strictly follows the rules defined in SKILL.md (e.g., it must extract attendees, core topics, and final decisions), generates a highly structured response, and hands it to the client tool to display to the user.
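The four steps above can be sketched in a few lines of Python. This is a toy illustration of the routing logic, not Anthropic's actual implementation; the skill names, descriptions, and the keyword-based matcher all stand in for the model's real semantic matching:

```python
# Hypothetical skill registry: metadata is always visible, instructions
# are loaded lazily from disk only after a skill is selected.
SKILLS = {
    "meeting-summary": {
        "description": "Summarize meetings into attendees, issues, decisions.",
        "path": "meeting-summary/SKILL.md",
    },
    "viral-copywriting": {
        "description": "Write short-form promotional copy.",
        "path": "viral-copywriting/SKILL.md",
    },
}

def metadata_catalog() -> dict:
    """Step 1: the lightweight directory sent along with every request."""
    return {name: meta["description"] for name, meta in SKILLS.items()}

def match_skill(request: str):
    """Step 2: crude stand-in for the model's semantic intent matching."""
    for name in metadata_catalog():
        if any(word in request.lower() for word in name.split("-")):
            return name
    return None

def load_instructions(name: str) -> str:
    """Step 3: only the selected skill's full SKILL.md is read."""
    # A real client would open SKILLS[name]["path"] here.
    return f"<full instructions of {name}>"

def handle(request: str) -> str:
    """Step 4: request + full instructions go back to the model to execute."""
    name = match_skill(request)
    if name is None:
        return "no skill matched; answer normally"
    return f"execute with {load_instructions(name)}"

print(handle("Please summarize this morning's project meeting"))
# → execute with <full instructions of meeting-summary>
```

The key property to notice is that `load_instructions` runs for at most one skill per request, which is exactly why installing dozens of Skills does not inflate the model's context.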
The workflow in the previous chapter introduced the first core underlying mechanism of Agent Skill: on-demand loading.
Although the names and descriptions of all skills are always visible to the large model, the specific instructions are only actually retrieved into the model's context after the skill is precisely hit.
This significantly conserves valuable token resources. Imagine that even if you deploy a dozen large-scale Skills simultaneously, such as "viral copywriting," "meeting summaries," and "on-chain data analysis," the model initially only needs to perform a very low-cost "directory search." Only after a target is selected will the system feed the corresponding skill.md file to the model. This "on-demand loading" is the first layer of secret to keeping Agent Skills lightweight and efficient.
However, for advanced users who pursue ultimate efficiency, simply achieving the first level of on-demand loading is not enough.
As our business deepens, we often want our skills to become smarter. Take the "Meeting Summary Assistant" as an example. We want it to not only simply summarize the topics but also provide incremental insights: when a meeting decides to spend money, it can directly indicate in the summary whether it complies with the group's financial compliance; when external collaborations are involved, it can automatically alert to potential legal risks. This way, when the team reviews the summary, they can instantly spot key compliance warnings, eliminating the tedious process of double-checking regulations.
However, this creates a fatal engineering contradiction: for the Skill to possess this capability, it would have to cram the lengthy "Financial Regulations" and "Legal Provisions" in their entirety into the SKILL.md file, leaving the core instruction file incredibly bloated. Even for a purely technical morning meeting, the model would be forced to load tens of thousands of words of irrelevant financial and legal text, which both wastes tokens badly and easily causes the model to lose focus.
So, could we implement an additional layer of "on-demand within on-demand" on top of on-demand loading? For example, could the system only show the model financial regulations when the meeting actually touches on the topic of "money"?
The answer is yes. The Reference mechanism in the Agent Skill system was created precisely for this purpose.
The essence of Reference is a conditionally triggered external knowledge base. Let's see how it elegantly solves the pain points above:
1. Create a reference file named 集团财务手册.md ("Group Finance Manual") in the skill folder, detailing the reimbursement standards (e.g., an accommodation allowance of 500 yuan/night, meal expenses of 300 yuan/person/day, and so on).
2. Return to the SKILL.md file and add a dedicated "Financial Reminder Rule," defined explicitly in natural language: "Trigger only when the meeting content mentions words such as money, budget, procurement, or expenses. When triggered, the 集团财务手册.md file must be read; based on its content, indicate whether the amounts in the meeting's decisions exceed the limits and name the corresponding approver."

Once the setup is complete, a brilliant dynamic collaboration unfolds when we review the budget allocation in our next meeting:
1. The model first loads the complete SKILL.md (the first layer of on-demand loading).
2. Detecting budget-related wording, it asks: "May I read 集团财务手册.md?" (the second layer of on-demand loading: the Reference is dynamically triggered).

Remember the core characteristic of Reference: it is strictly conditional. Conversely, if today's meeting is a technical debriefing about code logic and has nothing to do with money, 集团财务手册.md lies quietly on the hard drive, never consuming a single token of compute.
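The conditional trigger can be sketched in Python as well. The trigger words and file name below are illustrative (an English stand-in is used for the manual's filename); in a real Skill, the condition lives in natural language inside SKILL.md and the model itself decides when it is met:

```python
# Illustrative "on-demand within on-demand": the reference file is read
# only when a trigger word appears in the meeting content.
TRIGGER_WORDS = {"money", "budget", "procurement", "expense"}

def maybe_load_reference(meeting_text: str, path: str = "finance-manual.md"):
    """Return the reference content only if a trigger word is hit."""
    if not any(word in meeting_text.lower() for word in TRIGGER_WORDS):
        return None  # no trigger: the file never enters the context
    # A real skill run would read `path` from the skill folder here.
    return f"<contents of {path}>"

print(maybe_load_reference("we reviewed the Q3 budget allocation"))  # loaded
print(maybe_load_reference("we discussed the parser refactor"))      # None
```

The second call returns `None`, mirroring the case where the finance manual stays untouched on disk.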
Having covered the Reference mechanism for taming information overload, let's move on to the other killer feature of Agent Skill: code execution (Script).
For a mature agent, simply "searching for information" and "writing summaries" is not enough; true automation is achieved when it can directly get the job done. This is where scripts come in.
Let's continue using our "Meeting Summary Assistant" as an example. After the summary is written, it usually needs to be synchronized to the company's internal system. To achieve this final step, we create a new Python script named upload.py in the Skill folder, which contains the upload logic for connecting to the company server.
Next, we return to the core SKILL.md file and add an explicit instruction: "When the user mentions words such as 'upload,' 'sync,' or 'send to server,' you must run the upload.py script to push the generated summary content to the server."
When you say to the AI, "The summary looks good, please sync it to the server," the client tool will immediately ask for your permission to execute the upload.py file. But note a crucial piece of underlying logic: throughout this process, the AI does not "read" the contents of the code; it merely "executes" it.
This means that even if your Python script contains 10,000 lines of extremely complex business logic, its consumption of the large model's context is almost zero. The AI treats it like a "black box" tool: it only cares about how to start the tool and whether it ultimately succeeds, not how the box works inside.
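For concreteness, a minimal upload.py might look like the sketch below. The payload shape is invented for illustration, and the actual network call is stubbed out, since the real endpoint would be your company's internal API:

```python
# upload.py -- hypothetical sketch of the script the Skill invokes.
import json

def build_payload(summary_text: str) -> str:
    """Wrap the generated summary in the JSON body the server expects."""
    return json.dumps({"type": "meeting_summary", "body": summary_text})

def upload(summary_text: str) -> str:
    payload = build_payload(summary_text)
    # A real script would POST `payload` to the internal server here
    # (e.g. with urllib.request); we only report what would be sent.
    return f"would upload {len(payload)} bytes"

if __name__ == "__main__":
    print(upload("Attendees: A, B. Decisions: ship v2 on Friday."))
```

However long this file grows, the model never reads it; only the one-line trigger rule in SKILL.md occupies context.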
This reveals the fundamental mechanical difference between the two advanced features, Reference and Script:

- A Reference file, once triggered, is read into the model's context, so its token cost scales with the file's length.
- A Script is executed but never read, so its context cost stays near zero no matter how long the code is.
Of course, here's a tip to avoid pitfalls: when writing SKILL.md, you must spell out the script's trigger conditions and execution commands with absolute clarity. If the AI encounters ambiguous instructions and doesn't know how to proceed, it may try to peek inside the code for clues, burning through your tokens. The ironclad rule for writing skills is therefore: define the rules as clearly and completely as possible.
At this point, we've actually pieced together all the core components of Agent Skill. It's time to pause and summarize from a holistic perspective.
If you review the entire loading process carefully, you'll find that Agent Skill's design philosophy is an extremely refined progressive disclosure mechanism. To maximize computational efficiency while maintaining performance, the system is strictly divided into three layers, with progressively tighter trigger conditions:
- Layer 1 (Metadata): the `name` and `description` of all Agent Skills. This is the large model's "resident directory," extremely lightweight; the model glances at it before accepting every request to complete the initial routing.
- Layer 2 (Instructions): the full body of SKILL.md. Only when the first layer confirms which skill owns the task does the AI "open" this layer and load the specific rules into its working context.
- Layer 3 (Resources): reference files such as 集团财务手册.md are read only when a specific condition is triggered in the conversation (such as mentioning "money"); scripts such as upload.py are executed only when a specific action (such as "uploading") is required.

Having covered the advanced uses of Agent Skill, many readers familiar with underlying AI protocols may feel a strong sense of déjà vu: the Script mechanism of Agent Skill seems remarkably similar to the recently popular MCP (Model Context Protocol). Essentially, aren't both about enabling large models to connect to and manipulate the external world?
Since there is functional overlap, which one should we choose when building a Crypto Research workflow?
Regarding this question, Anthropic officials have drawn the core, essential distinction between the two, and it hits the nail on the head. MCP is essentially a "data pipeline," responsible for supplying external information to large models in a standardized way (such as querying the latest block height on-chain, pulling real-time candlestick data from exchanges, or reading local investment-research PDFs). Agent Skill, by contrast, is essentially a set of standard operating procedures (SOPs), regulating how the large model should work once it has that data (such as stipulating that research reports must include a token economics model and that output conclusions must carry risk warnings).
At this point, some tech enthusiasts might object: "Since Agent Skill can also run Python code, can't I just write some logic in the script to connect to the database or call the API? Agent Skill can completely do the work of MCP!"
Indeed, in terms of engineering implementation, Agent Skill can also pull data. Doing so, however, is extremely awkward and unprofessional.
This "lack of professionalism" shows up in two fatal dimensions:

- Statefulness: a Skill script is a one-shot execution, so it cannot maintain the persistent, stateful connections (such as WebSocket subscriptions) that real-time data work requires; a dedicated MCP server can.
- Standardization: every Skill script would have to reinvent its own connection and data-handling logic, whereas MCP supplies external information to the model through a single standardized interface.
Therefore, when building a high-level Crypto Research system, the most powerful solution is not to pick one of the two, but to combine them into a potent pairing: MCP supplies the water; Skill brews the tea.
To give everyone a direct feel for the power of this combination, we'll take opennews-mcp, built by Web3 developer Cryptoxiao, as an example and break down how to use API-enhanced Skills to create a fully automated crypto news intelligence center.
The core logic of this type of Skill is to encapsulate the discrete API capabilities provided by MCP into an intelligent agent oriented towards the final investment research goal through the instruction orchestration of the Skill.
This system endows AI with capabilities in four core modules:
Module 1: News Source Discovery
This is the entry point through which the AI learns the boundaries of the tool's capabilities. Through the tools in discovery.py, the AI can dynamically learn which channels it can obtain information from.
| Utility functions (Python) | SKILL.md description | Code-level capabilities |
|---|---|---|
| get_news_sources | Get all available news source categories | Calling the underlying api.get_engine_tree() returns a complete tree structure containing all news engines (such as news, listing, onchain) and their specific sources (such as Bloomberg, Binance). This allows AI to display optional news sources to the user. |
| list_news_types | List all available news type codes | It also calls api.get_engine_tree(), but flattens it into a simple list, making it easier for AI to use the news_type parameter for precise filtering when calling other tools. |
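To make the "tree vs. flat list" distinction concrete, here is a sketch of the flattening step list_news_types performs. The tree shape and values are illustrative, not the real response format of the underlying API:

```python
def flatten_engine_tree(tree: dict) -> list[str]:
    """Turn {engine: [sources]} into a flat list of selectable type codes."""
    return [
        f"{engine}/{source}"
        for engine, sources in tree.items()
        for source in sources
    ]

# Toy stand-in for what api.get_engine_tree() might return.
sample_tree = {"news": ["bloomberg"], "listing": ["binance"]}
print(flatten_engine_tree(sample_tree))  # → ['news/bloomberg', 'listing/binance']
```

The flat list is what makes it trivial for the AI to pick a valid `news_type` value when calling the other tools.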
Module Two: Multi-dimensional News Retrieval
This is the core query module, implemented by news.py , which provides a variety of news retrieval methods, ranging from simple to complex.
| Utility functions (Python) | SKILL.md description | Code-level capabilities |
|---|---|---|
| get_latest_news | Get the most recent crypto news | By directly calling api.search_news() without adding any filtering conditions, the news item "fire hose" can be retrieved. |
| search_news | Search crypto news by keyword | It accepts a keyword parameter and calls api.search_news(query=keyword) to perform a full-text keyword search. |
| search_news_by_coin | Search news related to a specific coin | Accepts a coin parameter (such as "BTC"), and calls api.search_news(coins=[coin]) to perform the most common query by currency. |
| get_news_by_source | Get news from a specific source | Accepts engine_type and news_type, and calls api.search_news(engine_types={...}) to achieve precise filtering by news source. |
| search_news_advanced | Advanced news search with multiple filters | This is a "super tool" that combines multiple parameters such as coins, keyword, engine_types, and has_coin to construct complex api.search_news() requests, enabling multi-dimensional cross-filtering. |
Module 3: AI-Enabled Analysis and Insights
This part of the tool utilizes the AI analysis results already completed by the 6551.io backend, allowing the AI Agent to directly query "opinions" rather than just "facts".
| Utility functions (Python) | SKILL.md description | Code-level capabilities |
|---|---|---|
| get_high_score_news | Get highly-rated news articles | Accepts the min_score parameter, first retrieves a batch of the latest news, then performs secondary filtering within the MCP server, returning only news with aiRating.score greater than or equal to the threshold, and sorting them in descending order of score. |
| get_news_by_signal | Get news filtered by trading signal | It accepts the signal parameter (long, short, neutral) and performs secondary filtering on the retrieved news internally on the server, returning only the results matched by aiRating.signal. |
Key Insight: When the AI Agent invokes these tools, it is unaware that the MCP server internally performs a two-step "get-filter" process. To the AI, it simply invokes a magical tool that directly returns "highly rated news" or "positive news," greatly simplifying its workflow.
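The two-step "get, then filter" behavior described for get_high_score_news can be sketched as follows; the item shape mirrors the aiRating field named in the table, but the exact schema is an assumption:

```python
def filter_high_score(news_items: list[dict], min_score: float) -> list[dict]:
    """Secondary filter: keep items at or above the threshold, best first."""
    kept = [
        n for n in news_items
        if n.get("aiRating", {}).get("score", 0) >= min_score
    ]
    return sorted(kept, key=lambda n: n["aiRating"]["score"], reverse=True)

# Toy batch standing in for the "latest news" fetched in step one.
batch = [
    {"title": "a", "aiRating": {"score": 9.1}},
    {"title": "b", "aiRating": {"score": 4.0}},
    {"title": "c", "aiRating": {"score": 7.5}},
]
print([n["title"] for n in filter_high_score(batch, 7.0)])  # → ['a', 'c']
```

All of this filtering happens server-side, which is why, from the Agent's point of view, the tool appears to return "highly rated news" in a single call.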
Module 4: Real-time News Stream
This is opennews-mcp's "killer" capability, implemented by realtime.py , which gives AI the ability to listen for real-time events.
| Utility functions (Python) | SKILL.md description | Code-level capabilities |
|---|---|---|
| subscribe_latest_news | Subscribe to real-time news updates | The `ws.subscribe_latest()` function establishes a WebSocket long-lived connection and subscribes to specific topics based on parameters such as `coins` and `engine_types`. It then continuously receives push notifications for `wait_seconds` seconds and finally returns all the collected news at once. |
Key insight: This functionality cannot be achieved with a pure Skill because it requires maintaining a stateful, persistent network connection. It can only be done through a dedicated MCP server.
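The collect-then-return pattern subscribe_latest_news is described as using (hold a connection open, gather pushes for `wait_seconds`, return everything at once) can be sketched with asyncio. A fake in-process feed stands in for the real WebSocket:

```python
import asyncio

async def fake_feed(queue: asyncio.Queue):
    """Stand-in for WebSocket pushes: one message every 0.05 s."""
    i = 0
    while True:
        await queue.put(f"news-{i}")
        i += 1
        await asyncio.sleep(0.05)

async def collect(wait_seconds: float) -> list:
    """Gather everything pushed within the window, then return in one batch."""
    queue: asyncio.Queue = asyncio.Queue()
    feeder = asyncio.create_task(fake_feed(queue))
    loop = asyncio.get_running_loop()
    deadline = loop.time() + wait_seconds
    messages = []
    while (remaining := deadline - loop.time()) > 0:
        try:
            messages.append(await asyncio.wait_for(queue.get(), timeout=remaining))
        except asyncio.TimeoutError:
            break
    feeder.cancel()  # close the "subscription"
    try:
        await feeder
    except asyncio.CancelledError:
        pass
    return messages

print(asyncio.run(collect(0.2)))  # e.g. ['news-0', 'news-1', 'news-2', 'news-3']
```

Because the feed must stay alive between messages, this loop illustrates the stateful, long-lived behavior that, as noted above, a pure Skill cannot provide but a dedicated MCP server can.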
Once these MCP-driven tools are written into the Agent Skill's instruction flow, your AI officially transforms from a "general chat assistant" into a "Wall Street-grade Web3 analyst." It can fully automate complex workflows that previously cost researchers hours:
Workflow Example 1: Rapid Due Diligence (DD) for New Currencies
1. Call opentwitter.get_twitter_user to retrieve the project's official Twitter data.
2. With opentwitter.get_twitter_kol_followers, analyze which top KOLs or VCs have been quietly following the project.
3. Call opennews.search_news_by_coin to retrieve media reports and public-relations moves.
4. Use opennews.get_high_score_news to discard worthless news flashes and read only the high-scoring long-form pieces.
Here, the Skill instructs the AI to call opennews.subscribe_latest_news to establish a long-lived WebSocket connection, listening precisely for news whose content contains "ZK" or "Zero-Knowledge Proof" and is associated with a specific token.

Thus, by standardizing behavioral logic through Agent Skills and connecting the data arteries through MCP, the loop of a highly automated, professional Crypto Research workflow is fully closed.


