Skills 详解:AI Agent 的模块化能力扩展系统
Skills 系统为 AI Agent 提供模块化能力扩展,通过标准化配置和渐进式加载机制解决通用模型缺乏领域知识的问题。文章涵盖系统架构、核心组件、三层加载策略、文件结构规范及 SKILL.md 格式。包含图片处理、数据分析、Web 开发、系统运维四个实战案例,并总结了设计原则、开发规范、性能优化及安全最佳实践,帮助开发者构建专业化 AI Agent 解决方案。


在人工智能快速发展的今天,AI Agent 的应用场景越来越广泛。然而,通用的 AI 模型往往缺乏特定领域的专业知识和操作流程。如何让 AI Agent 具备专业化的能力,成为了一个重要的技术挑战。
Skills 系统正是为了解决这个问题而设计的。它提供了一种标准化的方式,让开发者可以将专业知识、工具集成和操作流程打包成可复用的模块,从而快速构建具备专业能力的 AI Agent。
Skills(技能) 是一种模块化、自包含的能力扩展包,用于为 AI Agent 提供特定领域的专业知识、工作流程和工具集成。可以将 Skills 理解为 AI Agent 的"专业培训教材"——它们将通用的 AI 模型转变为具备特定领域专业能力的专家系统。
| 特性 | 传统插件系统 | Skills 系统 |
|---|---|---|
| 主要用途 | 功能扩展 | 知识和流程传递 |
| 内容形式 | 可执行代码 | 配置文件 + 资源包 |
| 加载方式 | 运行时动态加载 | 按需上下文加载 |
| 适用场景 | 功能增强 | 专业领域指导 |
| 维护成本 | 需要代码维护 | 主要是文档维护 |
在现代 AI Agent 架构中,Skills 扮演着"知识库"和"操作手册"的双重角色:
Skills 将专业领域的知识以结构化的方式组织起来,包括操作流程、配置规范、参考文档和领域术语等内容。
Skills 提供了与外部工具和系统集成的标准化方法,例如脚本调用、API 封装和配置模板。
Skills 采用渐进式加载机制,按需将技能内容注入上下文,有效管理 AI Agent 的上下文窗口。
Skills 系统在需要专业领域知识、可复用工作流程或多工具协作的场景中表现出色。
通过 Skills 系统,开发者可以快速构建专业化的 AI Agent,大大提高了 AI 在特定领域的应用效果和用户体验。
在接下来的章节中,我们将深入探讨 Skills 的工作原理、配置格式,并通过具体实例展示如何开发和使用 Skills。
理解 Skills 系统的工作原理,是有效开发和使用 Skills 的基础。本章将从架构设计、加载机制、执行流程等多个维度,深入分析 Skills 系统的技术实现。
Skills 系统采用分层架构设计,确保了系统的可扩展性和维护性:
```
AI Agent Layer
└── AI Agent Core
Skills Management Layer
├── Skills Manager
├── Skill Loader
├── Skill Registry
└── Context Manager
Skills Storage Layer
├── Skill Metadata
├── Skill Instructions
└── Bundled Resources(Resource Types)
    ├── Scripts
    ├── References
    └── Assets
```
Skills 系统的核心创新在于其渐进式加载机制,这种设计有效解决了上下文窗口的限制问题:
```
用户请求 -> 技能匹配 -> Level 1: 元数据加载 -> 是否匹配?
    ↓ (否) 尝试下一个技能
    ↓ (是)
Level 2: 指令加载 -> 需要更多资源?
    ↓ (否) 执行任务
    ↓ (是)
Level 3: 按需资源加载
```
```yaml
name: "image-processor"
description: "Process and manipulate images including rotation, resizing, format conversion, and quality optimization"
```
特点:仅包含名称和描述等元数据,体量极小,可常驻上下文用于技能匹配。
```markdown
# Image Processing Skill

## Core Capabilities
- Image rotation and flipping
- Size adjustment and cropping
- Format conversion (JPEG, PNG, WebP)
- Quality optimization

## Usage Patterns
When users request image manipulation tasks, follow these steps:
1. Analyze the input image format and properties
2. Determine the required operations
3. Execute operations in optimal sequence
4. Validate output quality
```
特点:包含完整的操作指令,仅在技能匹配成功后才加载到上下文。
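上述三层加载可以用一段极简的 Python 草图来说明(其中的 `Skill` 类与各字段均为示意性假设,并非系统的真实实现):

```python
class Skill:
    def __init__(self, name, description, instructions, resources):
        self.name = name
        self.description = description      # Level 1: 元数据,始终加载
        self.instructions = instructions    # Level 2: 指令,匹配成功后加载
        self.resources = resources          # Level 3: 捆绑资源,按需加载


def build_context(skill, level):
    """按层级逐步把技能内容拼入上下文"""
    context = [f"{skill.name}: {skill.description}"]
    if level >= 2:
        context.append(skill.instructions)
    if level >= 3:
        context.extend(skill.resources.values())
    return "\n".join(context)


skill = Skill(
    "image-processor",
    "Process and manipulate images",
    "# Image Processing Skill\n- resize\n- convert",
    {"references/api_docs.md": "...detailed API docs..."},
)
# 层级越高,注入上下文的内容越多
print(len(build_context(skill, 1)) < len(build_context(skill, 3)))  # True
```

可以看到,Level 1 的上下文开销远小于完整加载,这正是渐进式加载节省上下文窗口的关键。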
Skills 系统采用多维度匹配算法,确保能够准确找到最适合的技能:
```python
def match_skills(user_request, available_skills):
    """技能匹配算法示例"""
    candidates = []
    for skill in available_skills:
        score = 0
        # 语义匹配评分
        semantic_score = calculate_semantic_similarity(
            user_request, skill.description
        )
        score += semantic_score * 0.4
        # 功能匹配评分
        capability_score = match_capabilities(
            extract_requirements(user_request), skill.capabilities
        )
        score += capability_score * 0.3
        # 历史表现评分
        performance_score = get_historical_performance(skill.id)
        score += performance_score * 0.3
        candidates.append((skill, score))
    # 按评分排序并返回最佳匹配
    candidates.sort(key=lambda x: x[1], reverse=True)
    return candidates[:3]  # 返回前 3 个候选技能
```
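其中 `calculate_semantic_similarity` 等辅助函数在上文并未给出。作为说明,下面是一个基于标准库 difflib 的极简替代实现(仅为示意;真实系统通常会用嵌入向量模型计算语义相似度):

```python
from difflib import SequenceMatcher


def calculate_semantic_similarity(text_a, text_b):
    """极简相似度实现:基于字符序列匹配,返回 0~1 之间的分数"""
    return SequenceMatcher(None, text_a.lower(), text_b.lower()).ratio()


score = calculate_semantic_similarity(
    "resize and convert my photos",
    "Process and manipulate images including rotation, resizing, format conversion",
)
print(0.0 <= score <= 1.0)  # True
```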
技能执行的典型时序如下(User ↔ AI Agent ↔ Skills Manager ↔ Skill Loader ↔ Skill Registry):

```
User 发送请求 -> AI Agent 分析技能需求 -> Skills Manager 查询匹配技能
-> Skill Registry 返回候选技能列表 -> Skill Loader 加载最佳匹配技能
-> 返回技能内容 -> 提供技能指导 -> AI Agent 执行任务 -> 返回结果
-> 报告执行状态 -> 更新技能统计
```
```python
def register_skill(skill_path):
    """注册新技能到系统"""
    # 验证技能格式
    validate_skill_format(skill_path)
    # 解析元数据
    metadata = parse_skill_metadata(skill_path)
    # 检查依赖关系
    check_dependencies(metadata.dependencies)
    # 注册到技能注册表
    skill_registry.register(metadata)
    # 建立索引
    build_search_index(metadata)


def activate_skill(skill_id, context):
    """激活技能并加载到上下文"""
    # 加载技能指令
    instructions = skill_loader.load_instructions(skill_id)
    # 注入上下文
    context.inject_skill_knowledge(instructions)
    # 准备资源访问接口
    setup_resource_access(skill_id, context)
    return context


def execute_with_skill(skill_id, task, context):
    """在技能指导下执行任务"""
    # 获取技能指导
    guidance = get_skill_guidance(skill_id, task)
    # 执行任务
    result = execute_task(task, guidance, context)
    # 记录执行结果
    log_execution_result(skill_id, task, result)
    return result


def cleanup_skill(skill_id, context):
    """清理技能相关资源"""
    # 从上下文移除技能内容
    context.remove_skill_knowledge(skill_id)
    # 清理临时资源
    cleanup_temporary_resources(skill_id)
    # 更新使用统计
    update_usage_statistics(skill_id)
```
Skills 系统支持复杂的依赖关系管理:
```yaml
dependencies:
  required:
    - "file-processor:^1.0.0"
    - "image-optimizer:>=2.1.0"
  optional:
    - "cloud-storage:*"
system_requirements:
  python: ">=3.8"
  node: ">=14.0"
  tools:
    - "imagemagick"
    - "ffmpeg"
```
```python
def resolve_dependencies(skill_metadata):
    """解析技能依赖关系"""
    dependency_graph = build_dependency_graph(skill_metadata)
    # 检查循环依赖
    if has_circular_dependency(dependency_graph):
        raise CircularDependencyError()
    # 拓扑排序确定加载顺序
    load_order = topological_sort(dependency_graph)
    # 版本兼容性检查
    validate_version_compatibility(load_order)
    return load_order
```
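这里引用的 `topological_sort` 与循环依赖检测可以直接用 Python 3.9+ 标准库 graphlib 实现。下面是一个示意性草图(图中的技能名称均为假设):

```python
from graphlib import TopologicalSorter, CycleError

# 技能依赖图:键为技能,值为它所依赖的技能集合(名称均为示例)
graph = {
    "image-processor": {"file-processor"},
    "file-processor": set(),
    "report-builder": {"image-processor", "file-processor"},
}

try:
    # static_order 即拓扑排序结果:被依赖者排在前面,先加载
    load_order = list(TopologicalSorter(graph).static_order())
    print(load_order)
except CycleError as e:
    # 存在循环依赖时 graphlib 会直接抛出 CycleError
    print("检测到循环依赖:", e)
```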
```json
{
  "1.0.x": ["1.0.0", "1.0.1", "1.0.2"],
  "1.1.x": ["1.1.0", "1.1.1"],
  "2.0.x": ["2.0.0"]
}
```
```python
def check_skill_updates():
    """检查技能更新"""
    for skill in registered_skills:
        latest_version = get_latest_version(skill.id)
        if version_compare(latest_version, skill.version) > 0:
            if is_compatible_update(skill.version, latest_version):
                schedule_update(skill.id, latest_version)
```
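上面引用的 `version_compare` 可以按语义化版本的主、次、修订号逐段比较来实现,例如下面这个只处理 X.Y.Z 形式的简化草图:

```python
def version_compare(a, b):
    """比较两个语义化版本号:a 较新返回正数,相同返回 0,a 较旧返回负数"""
    ta = tuple(int(x) for x in a.split("."))
    tb = tuple(int(x) for x in b.split("."))
    # 元组比较天然满足逐段优先级:主版本 > 次版本 > 修订号
    return (ta > tb) - (ta < tb)


print(version_compare("2.1.0", "2.0.9"))  # 1
print(version_compare("1.0.0", "1.0.0"))  # 0
```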
通过这种精心设计的架构和机制,Skills 系统能够高效、可靠地为 AI Agent 提供专业化能力扩展,同时保持良好的可维护性和扩展性。
Skills 的标准化格式是系统高效运行的基础。本章详细介绍 Skills 的文件结构、配置语法和内容组织规范,帮助开发者创建符合标准的高质量 Skills。
每个 Skill 都应该遵循以下标准目录结构:
```
skill-name/
├── SKILL.md                 # 核心技能文件(必需)
├── scripts/                 # 可执行脚本目录(可选)
│   ├── process_data.py      # Python 脚本示例
│   ├── deploy.sh            # Shell 脚本示例
│   └── utils.js             # JavaScript 工具脚本
├── references/              # 参考文档目录(可选)
│   ├── api_docs.md          # API 文档
│   ├── schemas.json         # 数据模式定义
│   └── troubleshooting.md   # 故障排除指南
└── assets/                  # 资源文件目录(可选)
    ├── templates/           # 模板文件
    ├── icons/               # 图标资源
    └── configs/             # 配置文件模板
```
| 文件类型 | 命名规范 | 示例 |
|---|---|---|
| 技能目录 | kebab-case | image-processor, data-analyzer |
| 核心文件 | 固定名称 | SKILL.md |
| 脚本文件 | snake_case | process_image.py, backup_data.sh |
| 文档文件 | kebab-case | api-reference.md, user-guide.md |
| 配置文件 | kebab-case | database-config.json, server-settings.yaml |
SKILL.md 文件采用 YAML 前置元数据 + Markdown 内容的格式:
```markdown
---
name: "skill-name"
description: "Skill description for matching and discovery"
version: "1.0.0"
author: "Developer Name"
tags: ["category1", "category2"]
dependencies:
  required: []
  optional: []
alwaysApply: false
enabled: true
---

# Skill Title

## Overview
Brief description of what this skill does...

## Usage Instructions
Step-by-step instructions on how to use this skill...

## Examples
Concrete examples of skill usage...
```
```yaml
---
name: "data-processor"  # 技能唯一标识符
description: "Process and transform data using various algorithms and formats"  # 技能描述
---
```
字段说明:
| 字段 | 类型 | 说明 | 示例 |
|---|---|---|---|
| `name` | string | 技能的唯一标识符,用于系统内部引用 | "image-processor" |
| `description` | string | 技能的功能描述,用于匹配和发现 | "Process images including resize, crop, and format conversion" |
重要提示:
- `name` 必须在系统内唯一,建议使用 kebab-case 格式
- `description` 应该准确描述技能的核心功能,避免过于宽泛或模糊

```yaml
---
name: "advanced-data-processor"
description: "Advanced data processing with ML algorithms"
version: "2.1.0"                            # 版本号
author: "Data Team <[email protected]>"     # 作者信息
tags: ["data", "ml", "processing"]          # 分类标签
created: "2024-01-15"                       # 创建日期
updated: "2024-03-20"                       # 更新日期
license: "MIT"                              # 许可证
homepage: "https://github.com/org/skill"    # 项目主页
dependencies:                               # 依赖关系
  required:
    - "file-handler:^1.0.0"
  optional:
    - "cloud-storage:>=2.0.0"
system_requirements:                        # 系统要求
  python: ">=3.8"
  memory: ">=4GB"
  tools: ["pandas", "numpy"]
alwaysApply: false                          # 是否总是应用
enabled: true                               # 是否启用
priority: 100                               # 优先级(数字越大优先级越高)
---
```
字段详细说明:
| 字段 | 类型 | 默认值 | 说明 |
|---|---|---|---|
| `version` | string | "1.0.0" | 遵循语义化版本规范 |
| `author` | string | - | 作者姓名和联系方式 |
| `tags` | array | [] | 用于分类和搜索的标签 |
| `created` | string | - | 创建日期 (YYYY-MM-DD) |
| `updated` | string | - | 最后更新日期 |
| `license` | string | - | 许可证类型 |
| `homepage` | string | - | 项目主页或文档地址 |
| `dependencies` | object | {} | 依赖关系定义 |
| `system_requirements` | object | {} | 系统环境要求 |
| `alwaysApply` | boolean | false | 是否在所有场景下都加载此技能 |
| `enabled` | boolean | true | 技能是否启用 |
| `priority` | number | 0 | 技能优先级,用于冲突解决 |
```yaml
dependencies:
  required:                               # 必需依赖
    - "file-processor:^2.0.0"             # 语义化版本约束
    - "image-handler:>=1.5.0,<2.0.0"      # 版本范围
  optional:                               # 可选依赖
    - "cloud-storage:*"                   # 任意版本
    - "notification-service:~1.2.0"       # 兼容版本
```
版本约束语法:
| 约束 | 含义 | 示例 |
|---|---|---|
| `^1.2.3` | 兼容版本 | >=1.2.3 <2.0.0 |
| `~1.2.3` | 近似版本 | >=1.2.3 <1.3.0 |
| `>=1.2.0` | 最小版本 | >=1.2.0 |
| `1.2.0` | 精确版本 | =1.2.0 |
| `*` | 任意版本 | 任何可用版本 |
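这些约束的判定逻辑可以用几行 Python 勾勒出来(仅覆盖表中列出的几种形式,属于示意性的简化实现,未处理预发布版本等语法):

```python
def parse(v):
    """把 "X.Y.Z" 解析为可比较的整数元组"""
    return tuple(int(x) for x in v.split("."))


def satisfies(version, constraint):
    """检查版本是否满足 npm 风格约束(仅演示 *、^、~、>= 和精确匹配)"""
    v = parse(version)
    if constraint == "*":
        return True
    if constraint.startswith("^"):          # 兼容版本:主版本不变
        base = parse(constraint[1:])
        return base <= v < (base[0] + 1, 0, 0)
    if constraint.startswith("~"):          # 近似版本:主、次版本不变
        base = parse(constraint[1:])
        return base <= v < (base[0], base[1] + 1, 0)
    if constraint.startswith(">="):         # 最小版本
        return v >= parse(constraint[2:])
    return v == parse(constraint)           # 精确版本


print(satisfies("1.5.2", "^1.2.3"))  # True
print(satisfies("2.0.0", "^1.2.3"))  # False
print(satisfies("1.2.9", "~1.2.3"))  # True
```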
```markdown
# Skill Title

## Overview
简要概述技能的功能和用途

## Prerequisites
使用此技能的前提条件

## Core Capabilities
技能的核心功能列表

## Usage Patterns
常见的使用模式和场景

## Configuration
配置参数和选项说明

## Examples
具体的使用示例

## Troubleshooting
常见问题和解决方案

## Related Skills
相关技能的引用
```
**1. 使用祈使语气**

✅ 正确:

```markdown
## Usage
To process an image, follow these steps:
1. Validate the input format
2. Apply the transformation
3. Save the result
```

❌ 错误:

```markdown
## Usage
You should process images by validating the input format first...
```

**2. 提供具体示例**

✅ 正确:

```python
resize_image("input.jpg", width=800, height=600, output="resized.jpg")
```

❌ 错误:只写 "Use the resize function to change image dimensions." 这类缺乏可执行细节的描述。

**3. 结构化信息组织**

✅ 正确:

```markdown
## Configuration Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `quality` | number | 85 | JPEG compression quality (1-100) |
| `format` | string | "jpeg" | Output format (jpeg, png, webp) |
```

❌ 错误:

```markdown
## Configuration
You can set quality and format parameters...
```
用于存放可执行脚本,这些脚本可以被 AI Agent 直接调用执行:
| 脚本类型 | 文件扩展名 | 用途 | 示例 |
|---|---|---|---|
| Python 脚本 | .py | 数据处理、API 调用 | process_data.py |
| Shell 脚本 | .sh | 系统操作、部署 | deploy.sh |
| JavaScript | .js | 前端处理、Node.js | validate.js |
| PowerShell | .ps1 | Windows 系统管理 | setup.ps1 |
```python
#!/usr/bin/env python3
"""Image processing utility script.

This script provides functions for basic image operations
including resize, crop, and format conversion.

Usage:
    python process_image.py --input image.jpg --output result.png --resize 800x600
"""
import argparse
import sys
from pathlib import Path


def main():
    """Main entry point for the script."""
    parser = argparse.ArgumentParser(description='Process images')
    parser.add_argument('--input', required=True, help='Input image path')
    parser.add_argument('--output', required=True, help='Output image path')
    parser.add_argument('--resize', help='Resize dimensions (WIDTHxHEIGHT)')
    args = parser.parse_args()
    try:
        # 处理逻辑(process_image 的实现由技能脚本提供)
        process_image(args.input, args.output, args.resize)
        print(f"Successfully processed {args.input} -> {args.output}")
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == "__main__":
    main()
```
用于存放详细的参考文档,这些文档会在需要时加载到 AI Agent 的上下文中:
| 文档类型 | 用途 | 示例文件名 |
|---|---|---|
| API 文档 | 接口说明和调用方法 | api-reference.md |
| 数据模式 | 数据结构定义 | schemas.json |
| 配置说明 | 详细的配置参数 | configuration.md |
| 故障排除 | 问题诊断和解决 | troubleshooting.md |
| 最佳实践 | 使用建议和优化 | best-practices.md |
```markdown
# API Reference

## Authentication
All API calls require authentication using API keys:

    GET /api/v1/data
    Authorization: Bearer YOUR_API_KEY
    Content-Type: application/json

## GET /api/v1/data
Retrieve data records with optional filtering.

Parameters:
- `limit` (integer, optional): Maximum number of records (default: 100)
- `offset` (integer, optional): Number of records to skip (default: 0)
- `filter` (string, optional): Filter expression

Response:

    {
      "data": [...],
      "total": 1234,
      "has_more": true
    }

Error Codes:
- `400`: Bad Request - Invalid parameters
- `401`: Unauthorized - Invalid API key
- `429`: Rate Limit Exceeded
```
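这类参考文档的价值在于 AI Agent 可以直接据此构造调用。作为示意,下面用 Python 标准库按文档拼出该请求并检查参数与请求头(`api.example.com` 为示例主机名,并非文档给定):

```python
from urllib.request import Request
from urllib.parse import urlencode

# 按文档构造 GET /api/v1/data 请求(主机名为示例假设)
params = {"limit": 10, "offset": 0}
req = Request(
    "https://api.example.com/api/v1/data?" + urlencode(params),
    headers={
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json",
    },
)
print(req.full_url)
print(req.get_header("Authorization"))  # Bearer YOUR_API_KEY
```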
#### Assets 目录
用于存放模板文件、配置文件和其他资源,这些文件会被用于生成输出:
##### 资源类型
| 资源类型 | 用途 | 示例 |
| --- | --- | --- |
| 模板文件 | 代码生成模板 | `templates/component.tsx` |
| 配置模板 | 配置文件模板 | `configs/nginx.conf.template` |
| 样式文件 | CSS/样式资源 | `styles/theme.css` |
| 图标资源 | 图标和图片 | `icons/logo.svg` |
##### 模板文件示例
```javascript
// templates/react-component.tsx
import React from 'react';
interface {{ComponentName}}Props {
{{#each props}}
{{name}}:{{type}};
{{/each}}
}
export const {{ComponentName}}: React.FC<{{ComponentName}}Props> = ({
{{#each props}}
{{name}}{{#unless @last}},{{/unless}}
{{/each}}
}) => {
return (
<div className="{{kebabCase componentName}}">{{content}}</div>
);
};
export default {{ComponentName}};
```
Skills 系统会自动验证以下内容:
```python
def validate_metadata(metadata):
    """验证技能元数据"""
    required_fields = ['name', 'description']
    for field in required_fields:
        if field not in metadata:
            raise ValidationError(f"Missing required field: {field}")
    # 验证名称格式
    if not re.match(r'^[a-z0-9-]+$', metadata['name']):
        raise ValidationError("Name must use kebab-case format")
    # 验证版本格式
    if 'version' in metadata:
        if not re.match(r'^\d+\.\d+\.\d+$', metadata['version']):
            raise ValidationError("Version must follow semantic versioning")


def validate_dependencies(dependencies):
    """验证依赖关系"""
    for dep in dependencies.get('required', []):
        if not skill_exists(dep):
            raise ValidationError(f"Required dependency not found: {dep}")
    # 检查循环依赖
    if has_circular_dependency(dependencies):
        raise ValidationError("Circular dependency detected")


def validate_file_structure(skill_path):
    """验证文件结构"""
    skill_file = skill_path / "SKILL.md"
    if not skill_file.exists():
        raise ValidationError("SKILL.md file is required")
    # 验证可选目录
    for directory in ['scripts', 'references', 'assets']:
        dir_path = skill_path / directory
        if dir_path.exists() and not dir_path.is_dir():
            raise ValidationError(f"{directory} must be a directory")
```
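其中的两条正则规则可以单独验证,下面是一个与 validate_metadata 规则一致、可独立运行的最小演示:

```python
import re


def is_valid_name(name):
    """kebab-case:仅允许小写字母、数字和连字符"""
    return bool(re.match(r'^[a-z0-9-]+$', name))


def is_valid_version(version):
    """语义化版本:必须为 X.Y.Z 形式"""
    return bool(re.match(r'^\d+\.\d+\.\d+$', version))


print(is_valid_name("image-processor"))  # True
print(is_valid_name("ImageProcessor"))   # False
print(is_valid_version("1.2.3"))         # True
print(is_valid_version("1.2"))           # False
```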
通过遵循这些格式规范和最佳实践,开发者可以创建高质量、易维护的 Skills,为 AI Agent 提供强大而可靠的能力扩展。
理论知识需要通过实际案例来加深理解。本章通过四个不同领域的完整 Skills 实例,展示如何设计和实现高质量的技能模块,涵盖图片处理、数据分析、Web 开发和系统运维等常见应用场景。
图片处理技能提供完整的图像操作能力,包括格式转换、尺寸调整、质量优化和批量处理等功能。
```yaml
---
name: "image-processor"
description: "Comprehensive image processing including resize, crop, format conversion, and batch operations"
version: "1.2.0"
author: "Media Team <[email protected]>"
tags: ["image", "processing", "media", "conversion"]
dependencies:
  required: []
  optional:
    - "cloud-storage:>=2.0.0"
system_requirements:
  python: ">=3.8"
  tools: ["pillow", "imagemagick"]
alwaysApply: false
enabled: true
priority: 80
---
```
# Image Processing Skill
## Overview
This skill provides comprehensive image processing capabilities for AI agents, enabling automatic image manipulation, optimization, and batch processing operations.
## Core Capabilities
### Format Conversion
- JPEG ↔ PNG ↔ WebP ↔ TIFF
- Automatic format detection
- Quality preservation options
### Size Operations
- Intelligent resizing with aspect ratio preservation
- Smart cropping with focus detection
- Thumbnail generation with multiple sizes
### Quality Optimization
- Lossless compression
- Progressive JPEG encoding
- WebP optimization for web delivery
### Batch Processing
- Directory-based batch operations
- Parallel processing for large datasets
- Progress tracking and error handling
## Usage Patterns
### Single Image Processing
When processing individual images:
1. Validate input format and accessibility
2. Determine required operations based on user intent
3. Apply transformations in optimal sequence
4. Validate output quality and file size
### Batch Operations
For multiple images:
1. Scan source directory for supported formats
2. Create processing queue with priority ordering
3. Execute operations with parallel processing
4. Generate summary report with statistics
## Configuration Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `max_width` | integer | 1920 | Maximum output width in pixels |
| `max_height` | integer | 1080 | Maximum output height in pixels |
| `quality` | integer | 85 | JPEG/WebP quality (1-100) |
| `preserve_metadata` | boolean | false | Keep EXIF data in output |
| `progressive` | boolean | true | Use progressive encoding |
| `optimize` | boolean | true | Enable optimization algorithms |
## Error Handling

- **Unsupported format**: Provide format conversion suggestions
- **File too large**: Offer compression or resizing options
- **Corrupted file**: Attempt repair or request replacement
- **Memory constraints**: Use streaming processing for large files

通用处理策略:
- Automatic fallback to alternative processing methods
- Graceful degradation with quality trade-offs
- Detailed error reporting with suggested solutions
```python
# scripts/process_image.py
#!/usr/bin/env python3
"""Advanced image processing script with comprehensive format support."""
import argparse
import sys
from pathlib import Path
from PIL import Image, ImageOps
import logging

# 配置日志
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)


class ImageProcessor:
    """高级图像处理器"""

    SUPPORTED_FORMATS = {'.jpg', '.jpeg', '.png', '.webp', '.tiff', '.bmp'}

    def __init__(self, max_width=1920, max_height=1080, quality=85):
        self.max_width = max_width
        self.max_height = max_height
        self.quality = quality

    def process_image(self, input_path, output_path, operations=None):
        """处理单个图像"""
        try:
            with Image.open(input_path) as img:
                # 自动旋转(基于 EXIF)
                img = ImageOps.exif_transpose(img)
                # 执行操作
                if operations:
                    for operation in operations:
                        img = self._apply_operation(img, operation)
                # 保存结果
                save_kwargs = {'quality': self.quality, 'optimize': True}
                if output_path.suffix.lower() in {'.jpg', '.jpeg'}:
                    save_kwargs['progressive'] = True
                img.save(output_path, **save_kwargs)
                logger.info("Processed %s -> %s", input_path, output_path)
        except Exception as e:
            logger.error("Failed to process %s: %s", input_path, e)
            raise

    def _apply_operation(self, img, operation):
        """根据操作类型分派处理"""
        op_type = operation.get('type')
        if op_type == 'resize':
            return self._resize_image(img, operation)
        elif op_type == 'crop':
            return self._crop_image(img, operation)
        elif op_type == 'rotate':
            return img.rotate(operation.get('angle', 0), expand=True)
        else:
            logger.warning("Unknown operation type: %s", op_type)
            return img

    def _resize_image(self, img, operation):
        """调整图像尺寸"""
        target_width = operation.get('width', self.max_width)
        target_height = operation.get('height', self.max_height)
        preserve_aspect = operation.get('preserve_aspect', True)
        if preserve_aspect:
            img.thumbnail((target_width, target_height), Image.Resampling.LANCZOS)
            return img
        return img.resize((target_width, target_height), Image.Resampling.LANCZOS)

    def _crop_image(self, img, operation):
        """按给定边界裁剪:box 为 (left, upper, right, lower)"""
        box = operation.get('box')
        return img.crop(box) if box else img

    def batch_process(self, input_dir, output_dir, operations=None):
        """批量处理目录中的图像"""
        input_path = Path(input_dir)
        output_path = Path(output_dir)
        output_path.mkdir(parents=True, exist_ok=True)
        image_files = []
        for ext in self.SUPPORTED_FORMATS:
            image_files.extend(input_path.glob(f'*{ext}'))
            image_files.extend(input_path.glob(f'*{ext.upper()}'))
        logger.info("Found %d images to process", len(image_files))
        success_count = 0
        for img_file in image_files:
            try:
                output_file = output_path / img_file.name
                self.process_image(img_file, output_file, operations)
                success_count += 1
            except Exception as e:
                logger.error("Skipping %s: %s", img_file, e)
        logger.info("Batch complete: %d/%d succeeded", success_count, len(image_files))


def main():
    parser = argparse.ArgumentParser(description='Process images')
    parser.add_argument('--input', required=True, help='Input image or directory')
    parser.add_argument('--output', required=True, help='Output image or directory')
    parser.add_argument('--resize', help='Resize dimensions (WIDTHxHEIGHT)')
    parser.add_argument('--quality', type=int, default=85, help='Output quality (1-100)')
    parser.add_argument('--batch', action='store_true', help='Batch process a directory')
    args = parser.parse_args()

    processor = ImageProcessor(quality=args.quality)
    operations = []
    if args.resize:
        width, height = map(int, args.resize.split('x'))
        operations.append({'type': 'resize', 'width': width,
                           'height': height, 'preserve_aspect': True})
    try:
        if args.batch:
            processor.batch_process(args.input, args.output, operations)
        else:
            processor.process_image(Path(args.input), Path(args.output), operations)
    except Exception as e:
        logger.error("Processing failed: %s", e)
        sys.exit(1)


if __name__ == '__main__':
    main()
```
数据分析技能提供全面的数据处理和分析能力,支持多种数据格式,包含统计分析、可视化和报告生成功能。
```yaml
---
name: "data-analyzer"
description: "Comprehensive data analysis including statistics, visualization, and automated reporting"
version: "2.0.1"
author: "Analytics Team <[email protected]>"
tags: ["data", "analysis", "statistics", "visualization"]
dependencies:
  required:
    - "file-processor:^1.0.0"
  optional:
    - "database-connector:>=2.1.0"
    - "cloud-storage:*"
system_requirements:
  python: ">=3.9"
  memory: ">=8GB"
  tools: ["pandas", "numpy", "matplotlib", "seaborn"]
alwaysApply: false
enabled: true
priority: 90
---
```
# Data Analysis Skill
## Overview
Advanced data analysis capabilities for AI agents, providing statistical analysis, data visualization, and automated report generation from various data sources.
## Core Capabilities
### Data Import & Processing
- Multi-format support: CSV, Excel, JSON, Parquet, SQL databases
- Automatic data type detection and conversion
- Missing value handling with multiple strategies
- Data validation and quality assessment
### Statistical Analysis
- Descriptive statistics with distribution analysis
- Correlation analysis and feature relationships
- Hypothesis testing and significance analysis
- Time series analysis and forecasting
### Data Visualization
- Automatic chart type selection based on data characteristics
- Interactive dashboards with drill-down capabilities
- Statistical plots: histograms, box plots, scatter matrices
- Time series visualizations with trend analysis
### Report Generation
- Automated insights discovery and narrative generation
- Executive summary with key findings
- Statistical appendix with detailed analysis
- Export to multiple formats: PDF, HTML, PowerPoint
## Usage Patterns
### Exploratory Data Analysis
For initial data exploration:
1. Load and validate data integrity
2. Generate descriptive statistics summary
3. Identify patterns, outliers, and anomalies
4. Create visualization suite for key relationships
5. Produce preliminary insights report
### Comparative Analysis
For comparing datasets or time periods:
1. Align data structures and time ranges
2. Calculate comparative metrics and ratios
3. Perform statistical significance tests
4. Generate side-by-side visualizations
5. Summarize key differences and trends
### Time Series Analysis
For forecasting and trend analysis:
1. Prepare time series data with proper indexing
2. Apply appropriate forecasting models
3. Validate model performance with cross-validation
4. Generate confidence intervals and scenarios
5. Create forecast visualizations with uncertainty bands
## Configuration Parameters
| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `confidence_level` | float | 0.95 | Statistical confidence level |
| `missing_threshold` | float | 0.1 | Maximum missing data ratio |
| `outlier_method` | string | "iqr" | Outlier detection method |
| `plot_style` | string | "seaborn" | Visualization style theme |
| `auto_insights` | boolean | true | Enable automatic insights |
| `report_format` | string | "pdf" | Default report format |
```python
# scripts/analyze_data.py
#!/usr/bin/env python3
"""Comprehensive data analysis script with automated insights."""
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from pathlib import Path
import argparse
import json
import sys
from datetime import datetime


class DataAnalyzer:
    """高级数据分析器"""

    def __init__(self, confidence_level=0.95, missing_threshold=0.1):
        self.confidence_level = confidence_level
        self.missing_threshold = missing_threshold
        self.insights = []
        # 设置可视化样式
        plt.style.use('seaborn-v0_8')
        sns.set_palette("husl")

    def load_data(self, file_path):
        """智能加载数据文件"""
        path = Path(file_path)
        suffix = path.suffix.lower()
        if suffix == '.csv':
            return pd.read_csv(path)
        elif suffix in ['.xlsx', '.xls']:
            return pd.read_excel(path)
        elif suffix == '.json':
            return pd.read_json(path)
        elif suffix == '.parquet':
            return pd.read_parquet(path)
        else:
            raise ValueError(f"Unsupported file format: {suffix}")

    def analyze_data_quality(self, df):
        """数据质量评估"""
        quality_report = {
            'total_rows': len(df),
            'total_columns': len(df.columns),
            'missing_values': {},
            'data_types': {},
            'duplicate_rows': int(df.duplicated().sum())
        }
        for col in df.columns:
            missing_count = df[col].isnull().sum()
            missing_ratio = missing_count / len(df)
            quality_report['missing_values'][col] = {
                'count': int(missing_count), 'ratio': missing_ratio}
            if missing_ratio > self.missing_threshold:
                self.insights.append(
                    f"Column '{col}' has {missing_ratio:.1%} missing values")
        for col in df.columns:
            quality_report['data_types'][col] = str(df[col].dtype)
        return quality_report

    def descriptive_analysis(self, df):
        """描述性统计分析"""
        numeric_cols = df.select_dtypes(include=[np.number]).columns
        categorical_cols = df.select_dtypes(include=['object', 'category']).columns
        analysis = {'numeric': {}, 'categorical': {}}
        if len(numeric_cols) > 0:
            analysis['numeric'] = df[numeric_cols].describe().to_dict()
            # IQR 离群值检测
            for col in numeric_cols:
                Q1 = df[col].quantile(0.25)
                Q3 = df[col].quantile(0.75)
                IQR = Q3 - Q1
                outliers = df[(df[col] < Q1 - 1.5 * IQR) | (df[col] > Q3 + 1.5 * IQR)]
                if len(outliers) > 0:
                    outlier_ratio = len(outliers) / len(df)
                    self.insights.append(
                        f"Column '{col}' contains {outlier_ratio:.1%} outliers")
        if len(categorical_cols) > 0:
            for col in categorical_cols:
                value_counts = df[col].value_counts()
                analysis['categorical'][col] = {
                    'unique_values': df[col].nunique(),
                    'top_value': value_counts.index[0] if len(value_counts) > 0 else None,
                    'top_count': int(value_counts.iloc[0]) if len(value_counts) > 0 else 0
                }
        return analysis

    def correlation_analysis(self, df):
        """相关性分析"""
        numeric_cols = df.select_dtypes(include=[np.number]).columns
        if len(numeric_cols) < 2:
            return None
        correlation_matrix = df[numeric_cols].corr()
        strong_correlations = []
        for i in range(len(correlation_matrix.columns)):
            for j in range(i + 1, len(correlation_matrix.columns)):
                corr_value = correlation_matrix.iloc[i, j]
                if abs(corr_value) > 0.7:
                    strong_correlations.append({
                        'var1': correlation_matrix.columns[i],
                        'var2': correlation_matrix.columns[j],
                        'correlation': corr_value
                    })
        for corr in strong_correlations:
            self.insights.append(
                f"Strong correlation ({corr['correlation']:.2f}) between "
                f"'{corr['var1']}' and '{corr['var2']}'")
        return correlation_matrix

    def create_visualizations(self, df, output_dir):
        """生成可视化图表"""
        output_path = Path(output_dir)
        output_path.mkdir(parents=True, exist_ok=True)
        numeric_cols = df.select_dtypes(include=[np.number]).columns
        categorical_cols = df.select_dtypes(include=['object', 'category']).columns
        # 数值列分布直方图
        if len(numeric_cols) > 0:
            fig, axes = plt.subplots(2, 2, figsize=(12, 10))
            fig.suptitle('Numeric Distributions', fontsize=14)
            for i, col in enumerate(numeric_cols[:4]):
                row, col_idx = divmod(i, 2)
                axes[row, col_idx].hist(df[col].dropna(), bins=30, alpha=0.7)
                axes[row, col_idx].set_title(col)
                axes[row, col_idx].set_xlabel(col)
                axes[row, col_idx].set_ylabel('Frequency')
            plt.tight_layout()
            plt.savefig(output_path / 'distributions.png', dpi=150, bbox_inches='tight')
            plt.close()
        # 相关性热力图
        if len(numeric_cols) > 1:
            plt.figure(figsize=(10, 8))
            correlation_matrix = df[numeric_cols].corr()
            sns.heatmap(correlation_matrix, annot=True, cmap='coolwarm',
                        center=0, square=True, linewidths=0.5)
            plt.title('Correlation Matrix')
            plt.tight_layout()
            plt.savefig(output_path / 'correlations.png', dpi=150, bbox_inches='tight')
            plt.close()
        # 类别列频数条形图
        if len(categorical_cols) > 0:
            fig, axes = plt.subplots(2, 2, figsize=(12, 10))
            fig.suptitle('Categorical Distributions', fontsize=14)
            for i, col in enumerate(categorical_cols[:4]):
                row, col_idx = divmod(i, 2)
                value_counts = df[col].value_counts().head(10)
                axes[row, col_idx].bar(range(len(value_counts)), value_counts.values)
                axes[row, col_idx].set_title(col)
                axes[row, col_idx].set_ylabel('Count')
                axes[row, col_idx].set_xticks(range(len(value_counts)))
                axes[row, col_idx].set_xticklabels(value_counts.index, rotation=45)
            plt.tight_layout()
            plt.savefig(output_path / 'categories.png', dpi=150, bbox_inches='tight')
            plt.close()

    def generate_report(self, df, output_path):
        """生成 JSON 分析报告"""
        quality_report = self.analyze_data_quality(df)
        descriptive_stats = self.descriptive_analysis(df)
        self.correlation_analysis(df)  # 结果写入 self.insights
        report = {
            'generated_at': datetime.now().isoformat(),
            'data_quality': quality_report,
            'descriptive_statistics': descriptive_stats,
            'insights': self.insights,
            'recommendations': self._generate_recommendations(df)
        }
        with open(output_path, 'w', encoding='utf-8') as f:
            json.dump(report, f, indent=2, ensure_ascii=False, default=str)
        return report

    def _generate_recommendations(self, df):
        """基于分析结果生成建议"""
        recommendations = []
        missing_cols = [col for col in df.columns
                        if df[col].isnull().sum() / len(df) > self.missing_threshold]
        if missing_cols:
            recommendations.append(
                f"Consider imputing or dropping high-missing columns: {missing_cols}")
        numeric_cols = df.select_dtypes(include=[np.number]).columns
        for col in numeric_cols:
            skewness = df[col].skew()
            if abs(skewness) > 1:
                recommendations.append(
                    f"Column '{col}' is highly skewed ({skewness:.2f}); "
                    "consider a log or power transform")
        return recommendations


def main():
    parser = argparse.ArgumentParser(description='Analyze data files')
    parser.add_argument('--input', required=True, help='Input data file')
    parser.add_argument('--output', required=True, help='Output directory')
    parser.add_argument('--confidence', type=float, default=0.95,
                        help='Statistical confidence level')
    args = parser.parse_args()

    analyzer = DataAnalyzer(confidence_level=args.confidence)
    try:
        df = analyzer.load_data(args.input)
        print(f"Loaded {len(df)} rows, {len(df.columns)} columns")
        output_dir = Path(args.output)
        output_dir.mkdir(parents=True, exist_ok=True)
        analyzer.create_visualizations(df, output_dir / 'charts')
        report = analyzer.generate_report(df, output_dir / 'report.json')
        print(f"Generated {len(report['insights'])} insights")
        print(f"Report saved to {output_dir / 'report.json'}")
    except Exception as e:
        print(f"Error: {e}", file=sys.stderr)
        sys.exit(1)


if __name__ == '__main__':
    main()
```
Web 开发技能提供现代 Web 应用的快速构建能力,包含前端框架集成、后端 API 开发和部署自动化。
```yaml
---
name: "web-builder"
description: "Modern web application development with React, Node.js, and automated deployment"
version: "3.1.2"
author: "Frontend Team <[email protected]>"
tags: ["web", "react", "nodejs", "deployment", "fullstack"]
dependencies:
  required:
    - "file-processor:^1.0.0"
  optional:
    - "database-connector:>=2.0.0"
    - "cloud-deployment:>=1.5.0"
system_requirements:
  node: ">=16.0.0"
  npm: ">=8.0.0"
  memory: ">=4GB"
alwaysApply: false
enabled: true
priority: 85
---
```
# Web Development Skill
## Overview
Comprehensive web development capabilities for building modern, responsive web applications using React, Node.js, and automated deployment pipelines.
## Core Capabilities
### Frontend Development
- React component generation with TypeScript support
- Responsive design with CSS-in-JS or Tailwind CSS
- State management with Redux Toolkit or Zustand
- Routing with React Router and protected routes
- Form handling with validation and error management
### Backend Development
- RESTful API development with Express.js
- GraphQL API with Apollo Server
- Authentication and authorization (JWT, OAuth)
- Database integration (MongoDB, PostgreSQL, MySQL)
- File upload and processing capabilities
### Development Tools
- Hot reload development server
- Code linting and formatting (ESLint, Prettier)
- Testing setup (Jest, React Testing Library)
- Build optimization and bundling
- Environment configuration management
### Deployment & DevOps
- Docker containerization
- CI/CD pipeline configuration
- Cloud platform deployment (Vercel, Netlify, AWS)
- Performance monitoring and analytics
- Error tracking and logging
## Usage Patterns
### Single Page Application (SPA)
For building interactive web applications:
1. Initialize React project with TypeScript template
2. Set up routing structure and navigation
3. Create reusable component library
4. Implement state management and API integration
5. Add authentication and user management
6. Configure build and deployment pipeline
### Full-Stack Application
For complete web solutions:
1. Set up monorepo structure with frontend and backend
2. Design database schema and API endpoints
3. Implement backend services with proper validation
4. Create frontend components consuming APIs
5. Add comprehensive testing coverage
6. Configure production deployment with monitoring
### Content Website
For content-focused websites:
1. Set up Next.js or Gatsby project
2. Configure content management system integration
3. Create dynamic page generation from content
4. Optimize for SEO and performance
5. Set up automated content deployment
6. Add analytics and performance monitoring
#### Node.js API Template
```json
{
"name": "nodejs-api-template",
"structure": {
"src/": {
"controllers/": "Request handlers",
"models/": "Data models",
"routes/": "API route definitions",
"middleware/": "Custom middleware",
"services/": "Business logic",
"config/": "Configuration files",
"utils/": "Utility functions"
},
"tests/": "API tests",
"docs/": "API documentation"
}
}
```
系统运维技能提供全面的服务器管理和自动化运维能力,包含监控、部署、备份和故障处理等功能。
```yaml
---
name: "system-admin"
description: "Comprehensive system administration including monitoring, deployment, backup, and troubleshooting"
version: "2.3.0"
author: "DevOps Team <[email protected]>"
tags: ["devops", "monitoring", "deployment", "backup", "linux"]
dependencies:
  required:
    - "file-processor:^1.0.0"
  optional:
    - "notification-service:>=1.2.0"
    - "cloud-storage:>=2.0.0"
system_requirements:
  os: "linux"
  shell: "bash"
  tools: ["docker", "systemctl", "crontab"]
alwaysApply: false
enabled: true
priority: 95
---
```
```bash
#!/bin/bash
# scripts/system_monitor.sh
# 系统监控和健康检查脚本

set -euo pipefail

# 配置参数
ALERT_THRESHOLD_CPU=80
ALERT_THRESHOLD_MEMORY=85
ALERT_THRESHOLD_DISK=90
LOG_FILE="/var/log/system_monitor.log"
REPORT_FILE="/tmp/system_report.json"

# 日志函数
log() {
    echo "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

# 检查 CPU 使用率
check_cpu_usage() {
    local cpu_usage
    cpu_usage=$(top -bn1 | grep "Cpu(s)" | awk '{print $2}' | sed 's/%us,//')
    cpu_usage=${cpu_usage%.*}  # 去除小数部分
    if [ "$cpu_usage" -gt "$ALERT_THRESHOLD_CPU" ]; then
        log "WARNING: High CPU usage detected: ${cpu_usage}%"
        return 1
    fi
    return 0
}

# 检查内存使用率
check_memory_usage() {
    local memory_info total used memory_usage
    memory_info=$(free | grep Mem)
    total=$(echo "$memory_info" | awk '{print $2}')
    used=$(echo "$memory_info" | awk '{print $3}')
    memory_usage=$((used * 100 / total))
    if [ "$memory_usage" -gt "$ALERT_THRESHOLD_MEMORY" ]; then
        log "WARNING: High memory usage detected: ${memory_usage}%"
        return 1
    fi
    return 0
}

# 检查磁盘使用率
check_disk_usage() {
    local disk_usage
    disk_usage=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
    if [ "$disk_usage" -gt "$ALERT_THRESHOLD_DISK" ]; then
        log "WARNING: High disk usage detected: ${disk_usage}%"
        return 1
    fi
    return 0
}

# 检查关键服务状态(服务列表按实际环境填写,此处为示例)
check_services() {
    local services=("nginx" "docker")
    for service in "${services[@]}"; do
        if ! systemctl is-active --quiet "$service"; then
            log "WARNING: Service $service is not running"
            return 1
        fi
    done
    return 0
}

# 检查网络连通性
check_network() {
    if ping -c 1 8.8.8.8 >/dev/null 2>&1; then
        return 0
    fi
    log "WARNING: Network connectivity check failed"
    return 1
}

# 生成 JSON 状态报告
generate_report() {
    local timestamp
    timestamp=$(date -u +"%Y-%m-%dT%H:%M:%SZ")
    cat > "$REPORT_FILE" <<EOF
{
  "timestamp": "$timestamp",
  "hostname": "$(hostname)",
  "thresholds": {
    "cpu": $ALERT_THRESHOLD_CPU,
    "memory": $ALERT_THRESHOLD_MEMORY,
    "disk": $ALERT_THRESHOLD_DISK
  }
}
EOF
    log "Report written to $REPORT_FILE"
}

main() {
    local exit_code=0
    check_cpu_usage || exit_code=1
    check_memory_usage || exit_code=1
    check_disk_usage || exit_code=1
    check_network || exit_code=1
    check_services || exit_code=1
    generate_report
    if [ "$exit_code" -eq 0 ]; then
        log "All checks passed"
    else
        log "Some checks failed"
    fi
    return "$exit_code"
}

if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
```
```bash
#!/bin/bash
# scripts/deploy_application.sh
# 应用部署自动化脚本

set -euo pipefail

# 配置参数
APP_NAME="${APP_NAME:-myapp}"
DEPLOY_ENV="${DEPLOY_ENV:-production}"
DOCKER_REGISTRY="${DOCKER_REGISTRY:-registry.company.com}"
BACKUP_DIR="/opt/backups"
LOG_FILE="/var/log/deploy_${APP_NAME}.log"

# 颜色输出
RED='\033[0;31m'
GREEN='\033[0;32m'
YELLOW='\033[1;33m'
NC='\033[0m'  # No Color

# 日志函数
log() {
    echo -e "[$(date '+%Y-%m-%d %H:%M:%S')] $1" | tee -a "$LOG_FILE"
}

error()   { log "${RED}ERROR: $1${NC}"; }
success() { log "${GREEN}SUCCESS: $1${NC}"; }
warning() { log "${YELLOW}WARNING: $1${NC}"; }

# 预检查函数
pre_deployment_checks() {
    log "Running pre-deployment checks..."
    if ! docker info >/dev/null 2>&1; then
        error "Docker daemon is not running"
        return 1
    fi
    local disk_usage
    disk_usage=$(df -h / | awk 'NR==2 {print $5}' | sed 's/%//')
    if [ "$disk_usage" -gt 85 ]; then
        error "Insufficient disk space: ${disk_usage}% used"
        return 1
    fi
    if ! curl -s --connect-timeout 5 "https://${DOCKER_REGISTRY}" >/dev/null; then
        error "Cannot reach registry ${DOCKER_REGISTRY}"
        return 1
    fi
    success "Pre-deployment checks passed"
}

# 备份当前版本
backup_current_version() {
    local backup_name="${APP_NAME}_$(date +%Y%m%d_%H%M%S)"
    local backup_path="${BACKUP_DIR}/${backup_name}"
    mkdir -p "$backup_path"
    # 备份配置目录(如果存在)
    if [ -d "/opt/${APP_NAME}/config" ]; then
        cp -r "/opt/${APP_NAME}/config" "$backup_path/"
    fi
    # 备份数据库(数据库名按实际情况填写,此处以 APP_NAME 为示例)
    if command -v mysqldump >/dev/null 2>&1; then
        mysqldump --single-transaction "$APP_NAME" \
            > "${backup_path}/db_backup.sql" 2>/dev/null || true
    fi
    # 记录当前运行的容器镜像,便于回滚
    docker ps --filter "name=${APP_NAME}" --format '{{.Image}}' \
        > "${backup_path}/current_image.txt"
    success "Backup created at $backup_path"
}

# 拉取新镜像
pull_new_image() {
    local image_tag="${1:-latest}"
    local full_image="${DOCKER_REGISTRY}/${APP_NAME}:${image_tag}"
    if docker pull "$full_image" >/dev/null; then
        echo "$full_image"
    else
        error "Failed to pull $full_image"
        return 1
    fi
}

# 停止旧容器
stop_old_containers() {
    local containers
    containers=$(docker ps -q --filter "name=${APP_NAME}")
    if [ -n "$containers" ]; then
        docker stop $containers
        docker rm $containers
    fi
    success "Old containers stopped"
}

# 启动新容器
start_new_container() {
    local image_name="$1"
    docker run -d \
        --name "${APP_NAME}" \
        --restart unless-stopped \
        -p 8080:8080 \
        -v "/opt/${APP_NAME}/data:/app/data" \
        -v "/opt/${APP_NAME}/logs:/app/logs" \
        --env-file "/opt/${APP_NAME}/.env.${DEPLOY_ENV}" \
        "$image_name"
    success "New container started"
}

# 健康检查
health_check() {
    local max_attempts=30
    local attempt=1
    while [ "$attempt" -le "$max_attempts" ]; do
        if curl -s -f "http://localhost:8080/health" >/dev/null; then
            success "Health check passed"
            return 0
        fi
        sleep 10
        ((attempt++))
    done
    error "Health check failed after ${max_attempts} attempts"
    return 1
}

# 回滚
rollback() {
    error "Deployment failed, rolling back..."
    local new_containers
    new_containers=$(docker ps -q --filter "name=${APP_NAME}")
    if [ -n "$new_containers" ]; then
        docker stop $new_containers
        docker rm $new_containers
    fi
    warning "Rollback completed; restore from $BACKUP_DIR if needed"
    return 1
}

# 清理旧镜像(保留最近 3 个)
cleanup_old_images() {
    docker images "${DOCKER_REGISTRY}/${APP_NAME}" --format '{{.ID}}' \
        | tail -n +4 | xargs -r docker rmi
    docker image prune -f
    success "Old images cleaned up"
}

main() {
    local image_tag="${1:-latest}"
    trap rollback ERR
    pre_deployment_checks
    backup_current_version
    local full_image
    full_image=$(pull_new_image "$image_tag" | tail -n 1)
    stop_old_containers
    start_new_container "$full_image"
    if health_check; then
        cleanup_old_images
        success "Deployment of ${full_image} completed"
    else
        rollback
    fi
}

if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
    main "$@"
fi
```
通过这四个完整的实例,我们可以看到 Skills 系统在不同领域的应用方式。每个实例都展示了从配置定义到具体实现的完整过程,体现了 Skills 系统的灵活性和强大功能。这些实例可以作为开发新 Skills 的参考模板,帮助开发者快速上手并创建高质量的技能模块。
经过大量的实践和优化,我们总结出了一套完整的 Skills 开发最佳实践。本章将从设计原则、开发规范、性能优化、维护策略等多个维度,为开发者提供全面的指导建议。
每个 Skill 应该专注于一个特定的领域或任务,避免功能过于复杂或职责不清。
# ✅ 良好设计:专注于图像处理的技能
name: "image-processor"
description: "Image processing including resize, crop, format conversion, and optimization"
# ❌ 避免的设计:职责过于宽泛的技能
name: "media-handler"
description: "Handle images, videos, audio files, documents, and web scraping"
实施建议:
Skills 应该包含执行任务所需的所有信息,减少对外部依赖的需求。
核心要素:
实施策略:
# 在 SKILL.md 中包含完整信息
## Prerequisites
- System requirements
- Required tools and libraries
- Environment setup instructions
## Configuration
- All parameters with default values
- Parameter validation rules
- Environment-specific configurations
## Error Handling
- Common error scenarios
- Troubleshooting steps
- Recovery procedures
Skills 的内容组织应该从简单到复杂,支持不同层次的使用需求。
内容层次结构:
Level 1: 基础使用 (SKILL.md 主体)
├─ 快速开始指南
├─ 基本配置说明
└─ 常用操作示例
Level 2: 高级功能 (references/)
├─ 详细 API 文档
├─ 高级配置选项
└─ 复杂场景处理
Level 3: 专家级定制 (scripts/ + assets/)
├─ 自定义脚本
├─ 配置模板
└─ 扩展工具
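上面的三层内容结构可以用一个极简的加载器示意,目录名沿用上文约定(SKILL.md、references/、scripts/、assets/);类名与方法均为示例假设,并非真实 Skills 运行时 API:

```python
from pathlib import Path

class ProgressiveSkillLoader:
    """按三层结构按需读取技能内容的简化示意。"""
    def __init__(self, skill_dir: str):
        self.skill_dir = Path(skill_dir)

    def load_level1(self) -> str:
        """Level 1:仅加载 SKILL.md 主体(基础使用)"""
        return (self.skill_dir / "SKILL.md").read_text(encoding="utf-8")

    def load_level2(self, topic: str) -> str:
        """Level 2:按需加载 references/ 下的某篇高级文档"""
        return (self.skill_dir / "references" / f"{topic}.md").read_text(encoding="utf-8")

    def list_level3(self) -> list:
        """Level 3:列出 scripts/ 与 assets/ 中可用的专家级资源"""
        resources = []
        for sub in ("scripts", "assets"):
            d = self.skill_dir / sub
            if d.is_dir():
                resources.extend(sorted(p.name for p in d.iterdir()))
        return resources
```

这样 Level 2 与 Level 3 的内容只有在真正需要时才进入上下文,与渐进式加载机制一致。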
✅ 推荐命名:
- "database-migrator"
- "api-documentation-generator"
- "performance-monitor"
❌ 避免的命名:
- "db_mig"
- "APIDocGen"
- "perf-mon-sys"
技能目录/
├── SKILL.md # 固定名称,大写
├── scripts/
│ ├── process_data.py # snake_case
│ └── deploy_service.sh # snake_case
├── references/
│ ├── api-reference.md # kebab-case
│ └── troubleshooting.md # kebab-case
└── assets/
├── config-template.yaml # kebab-case
└── sample-data.json # kebab-case
---
name: "skill-name"
description: "Clear, specific description focusing on what the skill does and when to use it"
version: "1.0.0" # 语义化版本
author: "Team Name <email>" # 包含联系方式
tags: ["category", "technology"] # 3-5 个相关标签
dependencies: # 明确的依赖关系
required: ["essential-skill:^1.0.0"]
optional: ["enhancement-skill:*"]
---
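可以用一段简单的校验逻辑落实上述模板中的必填字段与命名约定(字段与规则取自上文,函数本身是示例假设,并非某个真实校验库的 API):

```python
import re

SEMVER_RE = re.compile(r"^\d+\.\d+\.\d+$")          # 语义化版本 MAJOR.MINOR.PATCH
NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")   # kebab-case 命名

def validate_skill_metadata(meta: dict) -> list:
    """返回违反约定的问题列表;空列表表示通过校验。"""
    errors = []
    for field in ("name", "description", "version"):
        if not meta.get(field):
            errors.append(f"missing required field: {field}")
    if meta.get("name") and not NAME_RE.match(meta["name"]):
        errors.append("name must be kebab-case, e.g. 'image-processor'")
    if meta.get("version") and not SEMVER_RE.match(meta["version"]):
        errors.append("version must follow MAJOR.MINOR.PATCH")
    if not 3 <= len(meta.get("tags", [])) <= 5:
        errors.append("recommend 3-5 tags")
    return errors
```

例如 `name: "APIDocGen"` 或 `version: "1.0"` 都会被报告出来,便于在提交前发现不合规的元数据。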
# Skill Title
## Overview (必需)
简洁的功能概述,1-2 段文字
## Prerequisites (推荐)
使用前提条件和环境要求
## Core Capabilities (必需)
核心功能列表,使用项目符号
## Usage Patterns (必需)
常见使用场景和工作流程
## Configuration (如适用)
配置参数说明,使用表格格式
## Examples (必需)
具体的使用示例,包含完整代码
## Troubleshooting (推荐)
常见问题和解决方案
## Related Skills (如适用)
相关技能的引用链接
✅ 完整的代码示例:
# 完整的可执行示例
from image_processor import ImageProcessor
processor = ImageProcessor(quality=85)
result = processor.resize_image(
input_path="input.jpg",
output_path="output.jpg",
width=800,
height=600
)
print(f"Processing completed: {result}")
❌ 不完整的示例:
# 缺少导入和上下文
processor.resize_image(input_path, output_path)
版本管理规范
语义化版本控制
版本格式:MAJOR.MINOR.PATCH
MAJOR: 不兼容的 API 变更
- 改变核心接口
- 删除功能
- 修改配置格式
MINOR: 向后兼容的功能新增
- 添加新功能
- 增强现有功能
- 新增配置选项
PATCH: 向后兼容的问题修正
- 修复错误
- 改进文档
- 性能优化
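上文依赖声明里的 `^` 约束(同一 MAJOR 且不低于指定版本即兼容)可以用几行代码示意;这是简化实现,未覆盖 0.x 版本在通行 semver 工具中的特殊语义:

```python
def parse_version(v: str) -> tuple:
    """把 "MAJOR.MINOR.PATCH" 解析成可比较的整数元组。"""
    major, minor, patch = (int(x) for x in v.split("."))
    return (major, minor, patch)

def satisfies_caret(installed: str, required: str) -> bool:
    """判断 installed 是否满足 "^required":同 MAJOR 且不低于 required。"""
    inst, req = parse_version(installed), parse_version(required)
    return inst[0] == req[0] and inst >= req
```

按此规则,`1.4.2` 满足 `^1.0.0`,而 `2.0.0` 因 MAJOR 变更(可能包含不兼容的 API 变更)不满足。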
# CHANGELOG.md
## [2.1.0] - 2024-03-15
### Added
- New batch processing capability
- Support for WebP format conversion
- Automatic quality optimization
### Changed
- Improved error handling for large files
- Updated dependency requirements
### Fixed
- Memory leak in batch processing
- Incorrect aspect ratio calculation
## [2.0.1] - 2024-02-28
### Fixed
- Critical security vulnerability in file upload
- Performance issue with large image processing
# 优化前 - 描述过于详细
description: "This comprehensive skill provides advanced image processing capabilities including but not limited to resizing, cropping, format conversion, quality optimization, batch processing, metadata handling, and integration with cloud storage services for modern web applications and mobile platforms"
# 优化后 - 简洁明确
description: "Image processing including resize, crop, format conversion, and batch operations"
# SKILL.md - 保持核心内容简洁
## Quick Start
Basic usage patterns and common operations
## Configuration
Essential parameters only
# references/advanced-guide.md - 详细内容分离
## Advanced Configuration
Detailed parameter explanations and edge cases
## Performance Tuning
Optimization strategies and benchmarks
# 优化前 - 每次重新计算
def process_image(image_path):
config = load_config() # 每次都加载配置
processor = ImageProcessor(config) # 每次都创建实例
return processor.process(image_path)
# 优化后 - 缓存和复用
class OptimizedImageProcessor:
_instance = None
_config = None
@classmethod
def get_instance(cls):
if cls._instance is None:
cls._config = load_config()
cls._instance = ImageProcessor(cls._config)
return cls._instance
def process_image(self, image_path):
processor = self.get_instance()
return processor.process(image_path)
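下面是这一缓存模式的可运行最小演示:ImageProcessor 与 load_config 用桩实现代替(仅为演示假设),用一个计数器验证多次调用时配置只加载、实例只创建一次:

```python
load_calls = 0  # 记录 load_config 被调用的次数

def load_config():
    global load_calls
    load_calls += 1
    return {"quality": 85}

class ImageProcessor:
    def __init__(self, config):
        self.config = config
    def process(self, path):
        return f"processed {path} at q={self.config['quality']}"

class OptimizedImageProcessor:
    _instance = None
    @classmethod
    def get_instance(cls):
        # 首次调用时加载配置并创建实例,之后直接复用
        if cls._instance is None:
            cls._instance = ImageProcessor(load_config())
        return cls._instance

for i in range(3):
    OptimizedImageProcessor.get_instance().process(f"img_{i}.jpg")
assert load_calls == 1  # 三次处理,配置只加载了一次
```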
# 压缩大型参考文档
gzip -9 references/large-documentation.md
# 优化图片资源
optipng assets/icons/*.png
jpegoptim --max=85 assets/images/*.jpg
# 压缩配置模板
tar -czf assets/templates.tar.gz assets/templates/
dependencies:
required:
- "core-utilities:^1.0.0" # 核心依赖
optional:
- "cloud-storage:>=2.0.0" # 按需加载
- "advanced-analytics:*" # 高级功能
conditional: # 条件依赖
- condition: "environment == 'production'"
dependencies: ["monitoring-tools:^1.5.0"]
- condition: "features.includes('ml')"
dependencies: ["ml-toolkit:>=3.0.0"]
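上面 YAML 中的条件依赖可以按如下方式解析:结构沿用上文(required/optional/conditional),条件匹配做了字符串级简化,仅作示意,真实实现通常需要一个表达式求值器:

```python
def resolve_dependencies(spec: dict, env: str, features: list) -> list:
    """根据运行环境与启用的特性,计算最终需要加载的依赖列表。"""
    deps = list(spec.get("required", []))
    for item in spec.get("conditional", []):
        cond = item["condition"]
        # 简化匹配:仅支持上文出现的两种条件形式
        if cond == f"environment == '{env}'" or any(
            cond == f"features.includes('{f}')" for f in features
        ):
            deps.extend(item["dependencies"])
    return deps
```

例如在 production 环境且未启用 ml 特性时,只会额外引入 monitoring-tools,而不会加载 ml-toolkit,从而减少不必要的上下文开销。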
# tests/test_skill_validation.py
import pytest
from skill_validator import SkillValidator
class TestSkillValidation:
def setup_method(self):
self.validator = SkillValidator()
def test_metadata_format(self):
"""测试元数据格式正确性"""
skill_path = "path/to/skill"
result = self.validator.validate_metadata(skill_path)
assert result.is_valid
assert "name" in result.metadata
assert "description" in result.metadata
def test_file_structure(self):
"""测试文件结构完整性"""
skill_path = "path/to/skill"
result = self.validator.validate_structure(skill_path)
assert result.has_skill_md
assert result.scripts_valid
assert result.references_accessible
def test_dependency_resolution(self):
"""测试依赖关系解析"""
skill_path = "path/to/skill"
result = self.validator.validate_dependencies(skill_path)
assert not result.has_circular_dependencies
assert result.all_dependencies_available
# tests/test_skill_functionality.py
class TestImageProcessorSkill:
def test_basic_resize(self):
"""测试基本调整大小功能"""
processor = ImageProcessor()
result = processor.resize_image(
"test_images/sample.jpg",
"output/resized.jpg",
width=800,
height=600
)
assert result.success
assert result.output_exists
assert result.dimensions == (800, 600)
def test_batch_processing(self):
"""测试批量处理功能"""
processor = ImageProcessor()
result = processor.batch_process(
input_dir="test_images/",
output_dir="output/batch/",
operations=[{"type": "resize", "width": 400}]
)
assert result.success_rate > 0.95
assert result.processed_count > 0
#!/bin/bash
# scripts/quality_check.sh
# Python 代码质量检查
echo "Running Python code quality checks..."
flake8 scripts/*.py --max-line-length=88
black --check scripts/*.py
mypy scripts/*.py
# Shell 脚本检查
echo "Running shell script checks..."
shellcheck scripts/*.sh
# 文档检查
echo "Running documentation checks..."
markdownlint *.md references/*.md
# 安全检查
echo "Running security checks..."
bandit -r scripts/
# tests/test_performance.py
import time
import pytest
from memory_profiler import profile
class TestPerformance:
@pytest.mark.performance
def test_image_processing_speed(self):
"""测试图像处理速度"""
processor = ImageProcessor()
start_time = time.time()
processor.resize_image("large_image.jpg", "output.jpg", 1920, 1080)
processing_time = time.time() - start_time
# 性能要求:大图处理不超过 5 秒
assert processing_time < 5.0
@profile
def test_memory_usage(self):
"""测试内存使用情况"""
processor = ImageProcessor()
# 批量处理 100 张图片
for i in range(100):
processor.resize_image(f"test_{i}.jpg", f"output_{i}.jpg")
# 内存使用应该保持稳定,不出现内存泄漏
开发 -> 测试 -> 发布 -> 维护(持续监控与优化)-> 更新 -> 废弃(提供迁移指南与替代方案)
# 版本支持矩阵
version_support:
"3.x.x":
status: "active"
support_until: "2025-12-31"
security_updates: true
feature_updates: true
"2.x.x":
status: "maintenance"
support_until: "2024-12-31"
security_updates: true
feature_updates: false
"1.x.x":
status: "deprecated"
support_until: "2024-06-30"
security_updates: true
feature_updates: false
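根据上面的支持矩阵,可以写一个小函数判断某个版本当前应获得哪类更新(矩阵数据取自上文,函数与字段组织方式为示例):

```python
from datetime import date

# 支持矩阵:按 MAJOR 版本索引,日期与策略与上文一致
SUPPORT_MATRIX = {
    "3": {"support_until": date(2025, 12, 31), "security": True,  "features": True},
    "2": {"support_until": date(2024, 12, 31), "security": True,  "features": False},
    "1": {"support_until": date(2024, 6, 30),  "security": True,  "features": False},
}

def update_policy(version: str, today: date) -> dict:
    """返回该版本在 today 这一天可获得的更新类型。"""
    entry = SUPPORT_MATRIX.get(version.split(".")[0])
    if entry is None or today > entry["support_until"]:
        return {"security": False, "features": False}  # 已超出支持期
    return {"security": entry["security"], "features": entry["features"]}
```

这样的查询逻辑可以接入 CI,在依赖了过期版本时自动告警。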
# monitoring/skill_analytics.py
class SkillAnalytics:
def track_usage(self, skill_name, operation, success, duration):
"""记录技能使用情况"""
metrics = {
'skill_name': skill_name,
'operation': operation,
'success': success,
'duration': duration,
'timestamp': datetime.utcnow(),
'user_agent': self.get_user_context()
}
self.metrics_collector.record(metrics)
    def generate_usage_report(self, skill_name, period='30d'):
        """生成使用情况报告"""
        data = self.metrics_collector.query(skill_name, period)
        if not data:  # 防止空数据导致除零
            return {'total_uses': 0}
        return {
            'total_uses': len(data),
            'success_rate': sum(1 for d in data if d['success']) / len(data),
            'avg_duration': sum(d['duration'] for d in data) / len(data),
            'popular_operations': self.get_top_operations(data),
            'error_patterns': self.analyze_errors(data)
        }
# monitoring/performance_monitor.py
class PerformanceMonitor:
def __init__(self):
self.thresholds = {
'response_time': 5.0, # 秒
'memory_usage': 1024, # MB
'error_rate': 0.05 # 5%
}
def check_performance(self, skill_name):
"""检查技能性能指标"""
metrics = self.get_recent_metrics(skill_name)
alerts = []
if metrics['avg_response_time'] > self.thresholds['response_time']:
alerts.append(f"High response time: {metrics['avg_response_time']:.2f}s")
if metrics['peak_memory'] > self.thresholds['memory_usage']:
alerts.append(f"High memory usage: {metrics['peak_memory']}MB")
if metrics['error_rate'] > self.thresholds['error_rate']:
alerts.append(f"High error rate: {metrics['error_rate']:.1%}")
return alerts
# CONTRIBUTING.md
## 如何贡献
### 报告问题
1. 检查现有 Issues 避免重复
2. 使用问题模板提供详细信息
3. 包含复现步骤和环境信息
### 提交改进
1. Fork 项目并创建特性分支
2. 遵循代码规范和测试要求
3. 提交 Pull Request 并描述变更
### 代码审查标准
- 功能正确性验证
- 代码质量和规范检查
- 性能影响评估
- 文档完整性确认
# 反馈收集配置
feedback_channels:
github_issues:
url: "https://github.com/org/skills/issues"
types: ["bug", "enhancement", "question"]
community_forum:
url: "https://forum.company.com/skills"
categories: ["general", "development", "showcase"]
direct_contact:
email: "[email protected]"
response_time: "48h"
# 避免在配置中硬编码敏感信息
❌ 错误做法:
database:
host: "prod-db.company.com"
username: "admin"
password: "secret123"
✅ 正确做法:
database:
host: "${DB_HOST}"
username: "${DB_USER}"
password: "${DB_PASSWORD}"
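对应上面"正确做法"的加载方式,可以用一个小函数把配置值中的 `${VAR}` 占位符展开为环境变量,变量缺失时显式报错而不是静默留空(示例实现,非某个配置库的真实 API):

```python
import os
import re
from typing import Optional

PLACEHOLDER = re.compile(r"\$\{([A-Za-z_][A-Za-z0-9_]*)\}")

def expand_env(value: str, env: Optional[dict] = None) -> str:
    """把 "${VAR}" 展开为环境变量值;变量缺失时抛出 KeyError。"""
    if env is None:
        env = dict(os.environ)
    def repl(match):
        name = match.group(1)
        if name not in env:
            raise KeyError(f"missing environment variable: {name}")
        return env[name]
    return PLACEHOLDER.sub(repl, value)
```

这样敏感信息只存在于部署环境中,配置文件可以安全地进入版本控制。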
# scripts/secure_processor.py
import os
import stat
import tempfile

class SecureFileProcessor:
    def __init__(self):
        # 设置安全的文件权限
        self.secure_permissions = stat.S_IRUSR | stat.S_IWUSR  # 仅所有者读写
    def create_temp_file(self, content):
        """创建安全的临时文件(先以独占方式创建并收紧权限,再写入,避免竞态)"""
        fd, temp_path = tempfile.mkstemp()
        os.chmod(temp_path, self.secure_permissions)
        with os.fdopen(fd, 'w') as f:
            f.write(content)
        return temp_path
def validate_input(self, file_path):
"""验证输入文件安全性"""
# 检查路径遍历攻击
if ".." in file_path or file_path.startswith("/"):
raise SecurityError("Invalid file path")
# 检查文件类型
allowed_extensions = {'.jpg', '.png', '.gif', '.pdf'}
if not any(file_path.endswith(ext) for ext in allowed_extensions):
raise SecurityError("File type not allowed")
        return True
# validation/input_validator.py
from typing import Any, Dict, List
import re

class ValidationError(Exception):
    """参数校验失败时抛出"""
    pass

class InputValidator:
def __init__(self):
self.validation_rules = {
'email': r'^[a-zA-Z0-9._%+-]+@[a-zA-Z0-9.-]+\.[a-zA-Z]{2,}$',
'filename': r'^[a-zA-Z0-9._-]+$',
'url': r'^https?://[^\s/$.?#].[^\s]*$'
}
def validate_parameters(self, params: Dict[str, Any], schema: Dict[str, Any]) -> bool:
"""根据模式验证参数"""
for param_name, param_value in params.items():
if param_name not in schema:
raise ValidationError(f"Unknown parameter: {param_name}")
param_schema = schema[param_name]
# 类型检查
if not isinstance(param_value, param_schema['type']):
raise ValidationError(f"Invalid type for {param_name}")
# 值范围检查
            # 值范围检查
            if 'min' in param_schema and param_value < param_schema['min']:
                raise ValidationError(f"Value below minimum for {param_name}")
            if 'max' in param_schema and param_value > param_schema['max']:
                raise ValidationError(f"Value above maximum for {param_name}")
            # 格式检查
            if 'pattern' in param_schema:
                pattern = self.validation_rules.get(param_schema['pattern'], param_schema['pattern'])
                if not re.match(pattern, str(param_value)):
                    raise ValidationError(f"Invalid format for {param_name}")
        return True
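`validate_parameters` 期望的 schema 形态可以用一个独立可运行的简化版说明:去掉类封装,ValidationError 以内置 ValueError 代替,这些替换属于示例假设:

```python
import re

def validate_parameters(params: dict, schema: dict) -> bool:
    """按 schema 逐项校验参数:类型、最小值、格式;失败即抛出异常。"""
    for name, value in params.items():
        if name not in schema:
            raise ValueError(f"Unknown parameter: {name}")
        rule = schema[name]
        if not isinstance(value, rule["type"]):
            raise ValueError(f"Invalid type for {name}")
        if "min" in rule and value < rule["min"]:
            raise ValueError(f"Value below minimum for {name}")
        if "pattern" in rule and not re.match(rule["pattern"], str(value)):
            raise ValueError(f"Invalid format for {name}")
    return True

# schema 用普通字典描述每个参数的约束
schema = {
    "width": {"type": int, "min": 1},
    "filename": {"type": str, "pattern": r"^[a-zA-Z0-9._-]+$"},
}
assert validate_parameters({"width": 800, "filename": "photo.jpg"}, schema)
```

非法输入(如 `width=0` 或含路径分隔符的文件名)会在进入业务逻辑前被拦截。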
通过遵循这些最佳实践,开发者可以创建高质量、安全可靠的 Skills,为 AI Agent 提供强大而稳定的能力扩展。这些实践不仅提高了 Skills 的质量,也为整个系统的可维护性和可扩展性奠定了坚实的基础。
Skills 系统作为 AI Agent 的模块化能力扩展机制,通过标准化的配置格式和渐进式加载策略,有效解决了 AI 在特定领域的专业化需求。本文从介绍、原理、格式、实例到最佳实践,全面阐述了 Skills 系统的设计思想和实现方法。
通过学习和应用这些知识,开发者可以:
Skills 系统的成功在于其简洁而强大的设计理念:将复杂的专业知识标准化、模块化,让 AI Agent 能够快速获得专业领域的能力。随着 AI 技术的不断发展,Skills 系统将继续演进,为构建更加智能和专业的 AI 应用提供坚实的基础。

微信公众号「极客日志」,在微信中扫描左侧二维码关注。展示文案:极客日志 zeeklog
使用加密算法(如AES、TripleDES、Rabbit或RC4)加密和解密文本明文。 在线工具,加密/解密文本在线工具,online
生成新的随机RSA私钥和公钥pem证书。 在线工具,RSA密钥对生成器在线工具,online
基于 Mermaid.js 实时预览流程图、时序图等图表,支持源码编辑与即时渲染。 在线工具,Mermaid 预览与可视化编辑在线工具,online
解析常见 curl 参数并生成 fetch、axios、PHP curl 或 Python requests 示例代码。 在线工具,curl 转代码在线工具,online
将字符串编码和解码为其 Base64 格式表示形式即可。 在线工具,Base64 字符串编码/解码在线工具,online
将字符串、文件或图像转换为其 Base64 表示形式。 在线工具,Base64 文件转换器在线工具,online