GraphQL in Python: A Complete Implementation Guide, from Fundamentals to Enterprise Practice
This guide covers implementing GraphQL in Python: schema design, the resolver mechanism, and a comparison of the Strawberry and Graphene frameworks. Through project architecture, performance optimization, Django integration, and a troubleshooting guide, it offers a complete path from getting started to enterprise-grade systems, including the type system, async support, DataLoader optimization, and query-complexity control.


As an API query language, GraphQL addresses several pain points of traditional REST architecture.
# graphql_core_value.py
class GraphQLValueProposition:
    """Demonstrates GraphQL's core value proposition."""

    def demonstrate_advantages(self):
        """Contrast GraphQL with REST on four common pain points."""
        rest_vs_graphql = {
            'over_fetching': {
                'rest': 'Fixed response shapes include fields the client never uses',
                'graphql': 'Clients request exactly the fields they need, avoiding redundant data'
            },
            'under_fetching': {
                'rest': 'Assembling a complete view often takes several round trips',
                'graphql': 'A single request can fetch all related data'
            },
            'versioning': {
                'rest': 'Requires explicit version management (v1, v2, ...)',
                'graphql': 'Schema evolution avoids breaking version splits'
            },
            'documentation': {
                'rest': 'Relies on external docs that easily go stale',
                'graphql': 'Built-in type system makes the API self-describing'
            }
        }
        print("=== Core advantages of GraphQL ===")
        for aspect, comparison in rest_vs_graphql.items():
            print(f"{aspect}:")
            print(f"  REST: {comparison['rest']}")
            print(f"  GraphQL: {comparison['graphql']}")
        return rest_vs_graphql
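The over-fetching point can be made concrete in a few lines of plain Python: given a full record, a GraphQL-style selection returns only the fields the client asked for. This is a minimal sketch, not tied to any GraphQL library, and the record shape is invented for illustration.

```python
from typing import Any, Dict, Set

# A full record, as a REST endpoint might return it.
USER_RECORD = {
    "id": 1,
    "username": "alice",
    "email": "alice@example.com",
    "address": "...",       # fields the client
    "preferences": "...",   # may never need
}

def select_fields(record: Dict[str, Any], requested: Set[str]) -> Dict[str, Any]:
    """Return only the fields the client asked for, GraphQL-style."""
    return {k: v for k, v in record.items() if k in requested}

# The client asks for exactly two fields and gets exactly two back.
print(select_fields(USER_RECORD, {"id", "username"}))  # {'id': 1, 'username': 'alice'}
```

A real GraphQL server applies the same idea recursively over the selection set of the query.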
The technical driver behind this evolution is the schema: it is the contract for the entire API, defining the data structures that can be queried and the operations that can be performed.
# schema_design.py
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class GraphQLType:
    """Base definition for a GraphQL object type."""
    name: str
    description: Optional[str] = None
    fields: List['GraphQLField'] = field(default_factory=list)


@dataclass
class GraphQLField:
    """Definition of a single GraphQL field."""
    name: str
    type: str
    required: bool = False
    description: Optional[str] = None
    args: List['GraphQLArgument'] = field(default_factory=list)


@dataclass
class GraphQLArgument:
    """Definition of a field argument."""
    name: str
    type: str
    required: bool = False
    default_value: Optional[str] = None


class SchemaDesigner:
    """Collects type/query/mutation definitions and renders them as SDL."""

    def __init__(self):
        self.types = {}
        self.queries = {}
        self.mutations = {}

    def add_object_type(self, name, fields, description=None):
        type_def = GraphQLType(name, description, fields)
        self.types[name] = type_def
        return type_def

    def add_query(self, name, return_type, args=None):
        field_def = GraphQLField(name, return_type, args=args or [])
        self.queries[name] = field_def
        return field_def

    def add_mutation(self, name, return_type, args=None):
        field_def = GraphQLField(name, return_type, args=args or [])
        self.mutations[name] = field_def
        return field_def

    @staticmethod
    def _render_field(f: GraphQLField) -> str:
        line = f"  {f.name}"
        if f.args:
            args_str = ", ".join(
                f"{a.name}: {a.type}{'!' if a.required else ''}" for a in f.args
            )
            line += f"({args_str})"
        line += f": {f.type}{'!' if f.required else ''}"
        return line

    def generate_sdl(self) -> str:
        sdl_lines = []
        for type_name, type_def in self.types.items():
            sdl_lines.append(f"type {type_name} {{")
            for f in type_def.fields:
                sdl_lines.append(self._render_field(f))
            sdl_lines.append("}")
        if self.queries:
            sdl_lines.append("type Query {")
            for query_field in self.queries.values():
                sdl_lines.append(self._render_field(query_field))
            sdl_lines.append("}")
        if self.mutations:
            sdl_lines.append("type Mutation {")
            for mutation_field in self.mutations.values():
                sdl_lines.append(self._render_field(mutation_field))
            sdl_lines.append("}")
        return "\n".join(sdl_lines)


def design_user_schema():
    """Demo: design a small user schema and print the generated SDL."""
    designer = SchemaDesigner()
    user_fields = [
        GraphQLField("id", "ID", required=True),
        GraphQLField("username", "String", required=True),
        GraphQLField("email", "String", required=True),
        GraphQLField("createdAt", "String"),
    ]
    designer.add_object_type("User", user_fields, "A registered user")
    designer.add_query("user", "User", [GraphQLArgument("id", "ID", required=True)])
    designer.add_mutation("createUser", "User", [
        GraphQLArgument("username", "String", required=True),
        GraphQLArgument("email", "String", required=True),
    ])
    sdl = designer.generate_sdl()
    print("=== Generated SDL ===")
    print(sdl)
    return sdl
Key features of GraphQL's type system: fields are nullable by default, the ! modifier marks a type non-null, and the list and non-null modifiers compose (for example, [Post!]! is a non-null list of non-null posts).
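As a quick illustration, the two modifiers can be rendered with two plain string helpers (the names non_null and list_of are assumed for this sketch; they are not part of any library):

```python
def non_null(t: str) -> str:
    """Render GraphQL's non-null modifier."""
    return f"{t}!"

def list_of(t: str) -> str:
    """Render GraphQL's list modifier."""
    return f"[{t}]"

# A non-null list of non-null Post values:
print(non_null(list_of(non_null("Post"))))  # [Post!]!
# A nullable list whose items are non-null:
print(list_of(non_null("Post")))            # [Post!]
```

Note that [Post!] (nullable list, non-null items) and [Post]! (non-null list, nullable items) mean different things; the modifiers compose independently at each level.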
The resolver is GraphQL's data-processing core: it maps each queried field to an actual data source.
# resolver_mechanism.py
import asyncio
from dataclasses import dataclass
from typing import Any, Dict, List, Optional


@dataclass
class ExecutionContext:
    """Execution context threaded through a GraphQL request."""
    query: Any
    variables: Dict[str, Any]
    operation_name: Optional[str]
    context_value: Any
    field_nodes: List[Any]
    return_type: Any
    parent_type: Any
    path: List[str]
    schema: Any


class ResolverEngine:
    """Simplified resolver execution engine (parse -> validate -> execute)."""

    def __init__(self):
        self.resolvers = {}
        self.dataloaders = {}

    def register_resolver(self, type_name: str, field_name: str, resolver_func):
        """Register a resolver under 'TypeName.fieldName'."""
        key = f"{type_name}.{field_name}"
        self.resolvers[key] = resolver_func

    async def execute_query(self, schema, query: str,
                            variables: Optional[Dict[str, Any]] = None,
                            operation_name: Optional[str] = None,
                            context: Any = None) -> Dict[str, Any]:
        document = self.parse_document(query)
        validation_errors = self.validate_query(schema, document)
        if validation_errors:
            return {'errors': validation_errors}
        return await self.execute_document(
            schema, document, variables, operation_name, context)

    def parse_document(self, query: str) -> Dict[str, Any]:
        # Stand-in for a real GraphQL parser.
        return {'kind': 'document', 'source': query}

    def validate_query(self, schema, document) -> List[str]:
        # A real engine validates the document against the schema here.
        return []

    async def execute_document(self, schema, document, variables,
                               operation_name, context) -> Dict[str, Any]:
        operation = self.select_operation(document, operation_name)
        exec_context = ExecutionContext(
            query=document,
            variables=variables or {},
            operation_name=operation_name,
            context_value=context,
            field_nodes=[],
            return_type=None,
            parent_type=None,
            path=[],
            schema=schema,
        )
        data = await self.execute_operation(operation, exec_context)
        return {'data': data}

    def select_operation(self, document, operation_name) -> Dict[str, Any]:
        # Simplified: infer the operation type from the query text.
        source = document['source'].lstrip()
        op_type = 'mutation' if source.startswith('mutation') else 'query'
        return {'type': op_type, 'name': operation_name}

    async def execute_operation(self, operation, context) -> Any:
        if operation['type'] == 'query':
            return await self.execute_root(f"Query.{operation['name']}", context)
        if operation['type'] == 'mutation':
            return await self.execute_root(f"Mutation.{operation['name']}", context)
        raise ValueError(f"Unsupported operation type: {operation['type']}")

    async def execute_root(self, resolver_key: str, context) -> Any:
        root_resolver = self.resolvers.get(resolver_key)
        if root_resolver is None:
            return {}
        return await self.resolve_field(None, {}, root_resolver, context)

    async def resolve_field(self, parent, args, resolver_func, context) -> Any:
        """Invoke a resolver, supporting both sync and async functions."""
        try:
            if asyncio.iscoroutinefunction(resolver_func):
                return await resolver_func(parent, context)
            return resolver_func(parent, context)
        except Exception as e:
            return {'error': str(e)}


class SimpleDataLoader:
    """Collects load() calls made in the same tick into one batch call."""

    def __init__(self, batch_load_fn):
        self.batch_load_fn = batch_load_fn
        self.cache: Dict[Any, asyncio.Future] = {}
        self.queue: List[Any] = []
        self._batch_scheduled = False

    async def load(self, key):
        if key in self.cache:
            return await self.cache[key]
        future = asyncio.get_running_loop().create_future()
        self.cache[key] = future
        self.queue.append(key)
        self.schedule_batch_load()
        return await future

    def schedule_batch_load(self):
        if self._batch_scheduled:
            return
        self._batch_scheduled = True

        async def run_batch_load():
            await asyncio.sleep(0)  # let concurrent load() calls enqueue first
            keys, self.queue = self.queue, []
            self._batch_scheduled = False
            try:
                results = await self.batch_load_fn(keys)
                for key, result in zip(keys, results):
                    if key in self.cache:
                        self.cache[key].set_result(result)
            except Exception as e:
                for key in keys:
                    if key in self.cache:
                        self.cache[key].set_exception(e)

        asyncio.create_task(run_batch_load())
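To see why batching matters, the following self-contained asyncio sketch (independent of the loader above; all names, including BatchCounter, are illustrative) shows ten concurrent loads collapsing into a single backend call instead of ten:

```python
import asyncio

class BatchCounter:
    """Fake backend that records how many batch calls it receives."""
    def __init__(self):
        self.batch_calls = 0

    async def load_many(self, keys):
        self.batch_calls += 1
        return [f"user-{k}" for k in keys]

async def gather_with_batching(backend, keys):
    queue, futures = [], {}

    async def flush():
        await asyncio.sleep(0)  # let every load() enqueue its key first
        batch = list(queue)
        queue.clear()
        results = await backend.load_many(batch)
        for k, r in zip(batch, results):
            futures[k].set_result(r)

    async def load(key):
        if key not in futures:
            futures[key] = asyncio.get_running_loop().create_future()
            queue.append(key)
        return await futures[key]

    flusher = asyncio.create_task(flush())
    results = await asyncio.gather(*(load(k) for k in keys))
    await flusher
    return results

backend = BatchCounter()
results = asyncio.run(gather_with_batching(backend, list(range(10))))
print(backend.batch_calls)  # 1
```

The single `await asyncio.sleep(0)` is the whole trick: it defers the flush by one event-loop tick, so every load scheduled in the same tick lands in the same batch. This is the same N+1 mitigation a production DataLoader performs per request.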
A side-by-side comparison of the two major Python GraphQL frameworks.
# framework_comparison.py
from dataclasses import dataclass
from enum import Enum
from typing import Any, Dict, List


class FrameworkType(Enum):
    STRAWBERRY = "strawberry"
    GRAPHENE = "graphene"


@dataclass
class FrameworkFeature:
    name: str
    strawberry_support: bool
    graphene_support: bool
    description: str


@dataclass
class PerformanceMetrics:
    framework: FrameworkType
    request_throughput: int   # requests/second
    average_latency: float    # milliseconds
    memory_usage: int         # MB


class FrameworkComparator:
    def __init__(self):
        self.features = self._initialize_features()
        self.performance_data = self._initialize_performance_data()

    def _initialize_features(self) -> List[FrameworkFeature]:
        return [
            FrameworkFeature("Type safety", True, False,
                             "Compile-time type checking via Python type hints"),
            FrameworkFeature("Async support", True, True,
                             "Native async/await resolvers"),
            FrameworkFeature("Dataclass-style schema", True, False,
                             "Types declared as decorated classes"),
            FrameworkFeature("Django integration", True, True,
                             "Graphene's graphene-django package is the more mature option"),
            FrameworkFeature("Relay support", True, True,
                             "Node interface, connections, cursor pagination"),
            FrameworkFeature("Subscriptions", True, True,
                             "Real-time updates over WebSockets"),
        ]

    def _initialize_performance_data(self) -> List[PerformanceMetrics]:
        # Placeholder figures for illustration only; benchmark your own workload.
        return [
            PerformanceMetrics(FrameworkType.STRAWBERRY, 1000, 12.0, 80),
            PerformanceMetrics(FrameworkType.GRAPHENE, 900, 15.0, 95),
        ]

    def generate_comparison_report(self) -> Dict[str, Any]:
        feature_support = {
            feature.name: {
                'strawberry': feature.strawberry_support,
                'graphene': feature.graphene_support,
                'description': feature.description,
            }
            for feature in self.features
        }
        performance_comparison = {
            metrics.framework.value: {
                'request_throughput': metrics.request_throughput,
                'average_latency': metrics.average_latency,
                'memory_usage': metrics.memory_usage,
            }
            for metrics in self.performance_data
        }
        return {
            'features': feature_support,
            'performance': performance_comparison,
            'recommendation': self._generate_recommendation(),
        }

    def _generate_recommendation(self) -> Dict[str, Any]:
        strawberry_score = float(sum(1 for f in self.features if f.strawberry_support))
        graphene_score = float(sum(1 for f in self.features if f.graphene_support))
        strawberry_perf = next(m for m in self.performance_data
                               if m.framework == FrameworkType.STRAWBERRY)
        graphene_perf = next(m for m in self.performance_data
                             if m.framework == FrameworkType.GRAPHENE)
        strawberry_score += strawberry_perf.request_throughput / 1000
        graphene_score += graphene_perf.request_throughput / 1000
        return {
            'strawberry_score': strawberry_score,
            'graphene_score': graphene_score,
            'recommendations': {
                'overall': 'strawberry' if strawberry_score > graphene_score else 'graphene',
                'greenfield_projects': 'Strawberry: modern, type-hint-driven API design',
                'django_projects': 'Graphene: long-standing graphene-django ecosystem',
                'type_safety_first': 'Strawberry',
                'legacy_graphene_codebases': 'Graphene',
            },
        }
Implementing a type-safe, high-performance GraphQL API with the Strawberry framework.
# strawberry_implementation.py
import asyncio
from datetime import datetime
from typing import List, Optional

import strawberry


@strawberry.type(description="User type")
class User:
    id: strawberry.ID
    username: str
    email: str
    created_at: datetime
    is_active: bool = True

    @strawberry.field(description="User profile")
    def profile(self) -> 'UserProfile':
        return UserProfile(bio=f"Bio of {self.username}")

    @strawberry.field(description="Posts by this user")
    async def posts(self, first: int = 10) -> List['Post']:
        await asyncio.sleep(0.01)  # simulate a database call
        return [
            Post(
                id=strawberry.ID(str(i)),
                title=f"Post {i} by {self.username}",
                content="Post content...",
                author=self,
            )
            for i in range(min(first, 100))
        ]


@strawberry.type
class UserProfile:
    bio: str
    avatar_url: Optional[str] = None


@strawberry.type
class Post:
    id: strawberry.ID
    title: str
    content: str
    author: User
    created_at: datetime = strawberry.field(default_factory=datetime.now)

    @strawberry.field(description="Comments on this post")
    async def comments(self) -> List['Comment']:
        await asyncio.sleep(0.01)
        return [
            Comment(
                id=strawberry.ID(str(i)),
                content="Nice post!",
                author=User(
                    id=strawberry.ID("1"),
                    username="commenter",
                    email="commenter@example.com",
                    created_at=datetime.now(),
                ),
            )
            for i in range(3)
        ]


@strawberry.type
class Comment:
    id: strawberry.ID
    content: str
    author: User


@strawberry.input
class CreateUserInput:
    username: str
    email: str
    password: str


@strawberry.input
class UpdateUserInput:
    username: Optional[str] = None
    email: Optional[str] = None
    is_active: Optional[bool] = None


@strawberry.type
class Query:
    @strawberry.field
    async def user(self, id: strawberry.ID) -> Optional[User]:
        await asyncio.sleep(0.01)
        if str(id) == "1":
            return User(id=id, username="alice", email="alice@example.com",
                        created_at=datetime.now())
        return None

    @strawberry.field
    async def users(self, skip: int = 0, limit: int = 20) -> List[User]:
        await asyncio.sleep(0.01)
        return [
            User(id=strawberry.ID(str(i)), username=f"user{i}",
                 email=f"user{i}@example.com", created_at=datetime.now())
            for i in range(skip, skip + min(limit, 100))
        ]

    @strawberry.field
    async def search_users(self, query: str) -> List[User]:
        await asyncio.sleep(0.01)
        return [User(id=strawberry.ID("1"), username=query,
                     email=f"{query}@example.com", created_at=datetime.now())]


@strawberry.type
class Mutation:
    @strawberry.mutation
    async def create_user(self, input: CreateUserInput) -> User:
        await asyncio.sleep(0.01)
        return User(id=strawberry.ID("100"), username=input.username,
                    email=input.email, created_at=datetime.now())

    @strawberry.mutation
    async def update_user(self, id: strawberry.ID,
                          input: UpdateUserInput) -> Optional[User]:
        await asyncio.sleep(0.01)
        return User(id=id, username=input.username or "alice",
                    email=input.email or "alice@example.com",
                    created_at=datetime.now())


schema = strawberry.Schema(
    query=Query,
    mutation=Mutation,
    config=strawberry.StrawberryConfig(auto_camel_case=True),
)

# Expose the schema through FastAPI.
from fastapi import FastAPI
from strawberry.fastapi import GraphQLRouter

app = FastAPI(title="GraphQL API", description="Strawberry + FastAPI demo")


@app.get("/health")
def health():
    return {"status": "ok"}


graphql_app = GraphQLRouter(schema)
app.include_router(graphql_app, prefix="/graphql")

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
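With auto_camel_case=True, Strawberry exposes snake_case Python fields under camelCase GraphQL names (created_at becomes createdAt in queries). A minimal sketch of that conversion, as a plain helper rather than Strawberry's internal implementation:

```python
def to_camel_case(snake: str) -> str:
    """Convert a snake_case identifier to camelCase."""
    head, *tail = snake.split('_')
    return head + ''.join(word.capitalize() for word in tail)

print(to_camel_case("created_at"))  # createdAt
print(to_camel_case("is_active"))   # isActive
```

The conversion only affects the GraphQL-facing names; resolver code keeps using the Pythonic snake_case attributes.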
Performance optimization in GraphQL centers on three levers: result caching, query-complexity limits, and batched data loading.
# performance_optimization.py
import asyncio
import time
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
from functools import wraps
from typing import Any, Dict, List


@dataclass
class CacheEntry:
    value: Any
    timestamp: float
    ttl: float


class PerformanceOptimizer:
    def __init__(self):
        self.cache: Dict[str, CacheEntry] = {}
        self.query_complexity_limits = {
            'max_depth': 10,
            'max_complexity': 1000,
            'max_aliases': 10,
        }
        self.thread_pool = ThreadPoolExecutor(max_workers=10)

    def cache_decorator(self, ttl: float = 300):
        """Cache an async resolver's results for `ttl` seconds."""
        def decorator(func):
            @wraps(func)
            async def wrapper(*args, **kwargs):
                cache_key = f"{func.__name__}:{args!r}:{sorted(kwargs.items())!r}"
                if cache_key in self.cache:
                    entry = self.cache[cache_key]
                    if time.time() - entry.timestamp < entry.ttl:
                        return entry.value
                result = await func(*args, **kwargs)
                self.cache[cache_key] = CacheEntry(result, time.time(), ttl)
                return result
            return wrapper
        return decorator

    def complexity_analyzer(self, query: str) -> Dict[str, int]:
        """Rough, text-based estimate of query depth/field/alias counts."""
        analysis = {'depth': 0, 'field_count': 0, 'complexity': 0, 'aliases': 0}
        for raw_line in query.strip().split('\n'):
            line = raw_line.strip()
            if not line or line.startswith('#'):
                continue
            indent = len(raw_line) - len(raw_line.lstrip())
            analysis['depth'] = max(analysis['depth'], indent // 2)
            if '{' not in line and '}' not in line:
                analysis['field_count'] += 1
                analysis['complexity'] += 1
            if ':' in line and '(' not in line:
                analysis['aliases'] += 1
        return analysis

    def enforce_complexity_limits(self, query: str) -> bool:
        analysis = self.complexity_analyzer(query)
        if analysis['depth'] > self.query_complexity_limits['max_depth']:
            return False
        if analysis['complexity'] > self.query_complexity_limits['max_complexity']:
            return False
        if analysis['aliases'] > self.query_complexity_limits['max_aliases']:
            return False
        return True

    async def batch_resolve(self, keys: List[Any], resolver_func) -> List[Any]:
        """Deduplicate keys, resolve once, then map results back in order."""
        unique_keys = list(dict.fromkeys(keys))
        results = await resolver_func(unique_keys)
        result_map = dict(zip(unique_keys, results))
        return [result_map[key] for key in keys]

    def create_dataloader(self, batch_load_fn):
        class SimpleDataLoader:
            def __init__(self, batch_load_fn):
                self.batch_load_fn = batch_load_fn
                self.cache: Dict[Any, asyncio.Future] = {}
                self.queue: List = []
                self.batch_scheduled = False

            async def load(self, key):
                if key in self.cache:
                    return await self.cache[key]
                future = asyncio.get_running_loop().create_future()
                self.cache[key] = future
                self.queue.append((key, future))
                if not self.batch_scheduled:
                    self.batch_scheduled = True
                    asyncio.create_task(self.dispatch_batch())
                return await future

            async def dispatch_batch(self):
                await asyncio.sleep(0)  # let concurrent loads enqueue first
                if not self.queue:
                    self.batch_scheduled = False
                    return
                queue, self.queue = self.queue, []
                self.batch_scheduled = False
                keys = [item[0] for item in queue]
                futures = [item[1] for item in queue]
                try:
                    results = await self.batch_load_fn(keys)
                    for future, result in zip(futures, results):
                        if not future.done():
                            future.set_result(result)
                except Exception as e:
                    for future in futures:
                        if not future.done():
                            future.set_exception(e)

        return SimpleDataLoader(batch_load_fn)


optimizer = PerformanceOptimizer()


@optimizer.cache_decorator(ttl=60)
async def fetch_user(key: str) -> Dict[str, Any]:
    await asyncio.sleep(0.01)  # simulate a slow lookup
    return {'id': key, 'name': f'user-{key}'}


async def batch_user_loader(keys: List[str]) -> List[Dict[str, Any]]:
    await asyncio.sleep(0.01)  # one round trip for the whole batch
    return [{'id': key, 'name': f'user-{key}'} for key in keys]


user_loader = optimizer.create_dataloader(batch_user_loader)
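The effect of TTL caching is easiest to verify with a self-contained synchronous version (an illustrative re-implementation of the same idea as cache_decorator above, not the class itself): within the TTL window, a repeated call never reaches the underlying function.

```python
import time
from functools import wraps

def ttl_cache(ttl: float):
    """Cache a function's results for `ttl` seconds, keyed by its arguments."""
    def decorator(func):
        store = {}

        @wraps(func)
        def wrapper(*args):
            now = time.time()
            if args in store:
                value, stamp = store[args]
                if now - stamp < ttl:
                    return value
            value = func(*args)
            store[args] = (value, now)
            return value
        return wrapper
    return decorator

calls = {"count": 0}

@ttl_cache(ttl=60)
def fetch_profile(user_id: int) -> dict:
    calls["count"] += 1          # stands in for a slow database hit
    return {"id": user_id}

fetch_profile(1)
fetch_profile(1)                 # served from cache, no second hit
print(calls["count"])  # 1
```

Keying on the full argument tuple means different users never share entries; the trade-off is that stale data can be served for up to ttl seconds after a write.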
A Graphene integration for Django projects, with a complete CRUD implementation.
# graphene_django_integration.py
import graphene
from graphene import relay
from graphene_django import DjangoObjectType
from graphene_django.filter import DjangoFilterConnectionField
from django.contrib.auth.models import User as AuthUser
from django.db import models


class Category(models.Model):
    name = models.CharField(max_length=100)
    description = models.TextField(blank=True)
    created_at = models.DateTimeField(auto_now_add=True)

    class Meta:
        verbose_name_plural = "Categories"

    def __str__(self):
        return self.name


class Article(models.Model):
    title = models.CharField(max_length=200)
    content = models.TextField()
    category = models.ForeignKey(Category, on_delete=models.CASCADE,
                                 related_name='articles')
    author = models.ForeignKey(AuthUser, on_delete=models.CASCADE)
    published = models.BooleanField(default=False)
    created_at = models.DateTimeField(auto_now_add=True)
    updated_at = models.DateTimeField(auto_now=True)

    def __str__(self):
        return self.title


class CategoryType(DjangoObjectType):
    article_count = graphene.Int(description="Number of articles in this category")

    class Meta:
        model = Category
        interfaces = (relay.Node,)
        filter_fields = {
            'name': ['exact', 'icontains', 'istartswith'],
            'created_at': ['gt', 'lt'],
        }

    def resolve_article_count(self, info):
        return self.articles.count()


class ArticleType(DjangoObjectType):
    excerpt = graphene.String(length=graphene.Int(default_value=100))

    class Meta:
        model = Article
        interfaces = (relay.Node,)
        filter_fields = {
            'title': ['exact', 'icontains'],
            'content': ['icontains'],
            'published': ['exact'],
            'category__name': ['exact'],
            'created_at': ['gt', 'lt'],
        }

    def resolve_excerpt(self, info, length):
        return self.content[:length] + '...' if len(self.content) > length else self.content


class CategoryInput(graphene.InputObjectType):
    name = graphene.String(required=True)
    description = graphene.String()


class ArticleInput(graphene.InputObjectType):
    title = graphene.String(required=True)
    content = graphene.String(required=True)
    category_id = graphene.ID(required=True)
    published = graphene.Boolean()


class Query(graphene.ObjectType):
    category = graphene.Field(CategoryType, id=graphene.ID(required=True))
    all_categories = DjangoFilterConnectionField(CategoryType)
    article = graphene.Field(ArticleType, id=graphene.ID(required=True))
    all_articles = DjangoFilterConnectionField(ArticleType)
    published_articles = DjangoFilterConnectionField(
        ArticleType, category_name=graphene.String())

    def resolve_category(self, info, id):
        return Category.objects.get(pk=id)

    def resolve_all_categories(self, info, **kwargs):
        return Category.objects.all()

    def resolve_article(self, info, id):
        return Article.objects.get(pk=id)

    def resolve_all_articles(self, info, **kwargs):
        return Article.objects.all()

    def resolve_published_articles(self, info, category_name=None, **kwargs):
        queryset = Article.objects.filter(published=True)
        if category_name:
            queryset = queryset.filter(category__name=category_name)
        return queryset


class CreateCategory(graphene.Mutation):
    class Arguments:
        input = CategoryInput(required=True)

    category = graphene.Field(CategoryType)

    def mutate(self, info, input):
        category = Category.objects.create(
            name=input.name,
            description=input.description or "",
        )
        return CreateCategory(category=category)


class UpdateCategory(graphene.Mutation):
    class Arguments:
        id = graphene.ID(required=True)
        input = CategoryInput(required=True)

    category = graphene.Field(CategoryType)

    def mutate(self, info, id, input):
        category = Category.objects.get(pk=id)
        category.name = input.name
        if input.description is not None:
            category.description = input.description
        category.save()
        return UpdateCategory(category=category)


class CreateArticle(graphene.Mutation):
    class Arguments:
        input = ArticleInput(required=True)

    article = graphene.Field(ArticleType)

    def mutate(self, info, input):
        user = info.context.user
        if not user.is_authenticated:
            raise Exception("Authentication required")
        article = Article.objects.create(
            title=input.title,
            content=input.content,
            category_id=input.category_id,
            author=user,
            published=bool(input.published),
        )
        return CreateArticle(article=article)


class Mutation(graphene.ObjectType):
    create_category = CreateCategory.Field()
    update_category = UpdateCategory.Field()
    create_article = CreateArticle.Field()


schema = graphene.Schema(query=Query, mutation=Mutation)

# urls.py
from django.urls import path
from django.views.decorators.csrf import csrf_exempt
from graphene_django.views import GraphQLView

urlpatterns = [
    path('graphql/', csrf_exempt(GraphQLView.as_view(graphiql=True, schema=schema))),
]


class AuthorizationMiddleware:
    """Reject unauthenticated mutation operations."""

    def resolve(self, next, root, info, **args):
        # info.operation.operation is a string in graphene 2 and an enum in
        # graphene 3; getattr handles both.
        op = getattr(info.operation.operation, 'value', info.operation.operation)
        if op == 'mutation' and not info.context.user.is_authenticated:
            raise Exception("Authentication required for mutations")
        return next(root, info, **args)

# Middleware is supplied at execution time, e.g.:
# GraphQLView.as_view(schema=schema, middleware=[AuthorizationMiddleware()])
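The middleware chaining itself is easy to see in a dependency-free sketch that mimics Graphene's resolve(next, root, info, **args) hook (the FakeInfo context and all other names here are invented for illustration):

```python
class FakeInfo:
    """Stand-in for graphene's ResolveInfo, carrying just what we need."""
    def __init__(self, operation, authenticated):
        self.operation = operation
        self.authenticated = authenticated

class AuthMiddleware:
    def resolve(self, next_, root, info, **args):
        if info.operation == "mutation" and not info.authenticated:
            raise PermissionError("Authentication required for mutations")
        return next_(root, info, **args)

def apply_middleware(resolver, middlewares):
    """Fold middlewares around a terminal resolver, outermost first."""
    chain = resolver
    for mw in reversed(middlewares):
        def make_layer(next_resolver, middleware):
            def layer(root, info, **args):
                return middleware.resolve(next_resolver, root, info, **args)
            return layer
        chain = make_layer(chain, mw)
    return chain

def hello_resolver(root, info, **args):
    return "hello"

resolver = apply_middleware(hello_resolver, [AuthMiddleware()])
print(resolver(None, FakeInfo("query", authenticated=False)))  # hello
```

Each middleware decides whether to call next_, so an outer layer can short-circuit the entire resolver chain, which is exactly how the authorization check above blocks mutations.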
Built from real project experience: a complete GraphQL performance-monitoring system.
# performance_monitoring.py
import asyncio
import logging
import statistics
import time
from dataclasses import dataclass
from datetime import datetime
from functools import wraps
from typing import Any, Dict, List, Optional


@dataclass
class QueryMetrics:
    query: str
    duration: float
    complexity: int
    field_count: int
    timestamp: datetime
    success: bool
    error: Optional[str] = None


class GraphQLMonitor:
    def __init__(self):
        self.metrics: List[QueryMetrics] = []
        self.logger = self.setup_logging()

    def setup_logging(self):
        logging.basicConfig(
            level=logging.INFO,
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
            handlers=[
                logging.FileHandler('graphql_performance.log'),
                logging.StreamHandler(),
            ],
        )
        return logging.getLogger(__name__)

    def track_performance(self, func):
        @wraps(func)
        async def wrapper(*args, **kwargs):
            start_time = time.time()
            query = kwargs.get('query') or (args[0] if len(args) > 0 else '')
            try:
                result = await func(*args, **kwargs)
                duration = time.time() - start_time
                self.metrics.append(QueryMetrics(
                    query=query[:200],
                    duration=duration,
                    complexity=self.calculate_complexity(query),
                    field_count=self.count_fields(query),
                    timestamp=datetime.now(),
                    success=True,
                ))
                if duration > 1.0:
                    self.logger.warning("Slow query (%.2fs): %s", duration, query[:80])
                return result
            except Exception as e:
                duration = time.time() - start_time
                self.metrics.append(QueryMetrics(
                    query=query[:200], duration=duration, complexity=0,
                    field_count=0, timestamp=datetime.now(),
                    success=False, error=str(e),
                ))
                self.logger.error("Query failed after %.2fs: %s", duration, e)
                raise
        return wrapper

    def calculate_complexity(self, query: str) -> int:
        """Rough complexity score: nested braces weigh more than flat fields."""
        if not query:
            return 0
        complexity = 0
        depth = 0
        in_field = False
        for char in query:
            if char == '{':
                depth += 1
                complexity += depth
            elif char == '}':
                depth = max(0, depth - 1)
            elif char.isalpha() and not in_field:
                complexity += 1
                in_field = True
            elif char.isspace():
                in_field = False
        return complexity

    def count_fields(self, query: str) -> int:
        if not query:
            return 0
        field_count = 0
        for raw_line in query.split('\n'):
            line = raw_line.strip()
            if line and not line.startswith(('query', 'mutation', '{', '}', '#')):
                field_count += 1
        return field_count

    def get_performance_report(self) -> Dict[str, Any]:
        if not self.metrics:
            return {'message': 'No metrics recorded yet'}
        successful = [m for m in self.metrics if m.success]
        failed = [m for m in self.metrics if not m.success]
        if successful:
            durations = sorted(m.duration for m in successful)
            return {
                'total_queries': len(self.metrics),
                'successful': len(successful),
                'failed': len(failed),
                'success_rate': len(successful) / len(self.metrics),
                'latency': {
                    'mean': statistics.mean(durations),
                    'p95': durations[int(len(durations) * 0.95)],
                    'max': max(durations),
                    'mean_complexity': statistics.mean(m.complexity for m in successful),
                    'mean_field_count': statistics.mean(m.field_count for m in successful),
                },
                'slowest_queries': [
                    {'query': m.query, 'duration': m.duration}
                    for m in sorted(successful, key=lambda x: x.duration, reverse=True)[:5]
                ],
            }
        return {
            'total_queries': len(self.metrics),
            'successful': 0,
            'failed': len(failed),
            'success_rate': 0.0,
            'latency': None,
            'slowest_queries': [],
        }

    def get_query_analytics(self, time_window: float = 3600) -> Dict[str, Any]:
        window_start = datetime.now().timestamp() - time_window
        recent = [m for m in self.metrics if m.timestamp.timestamp() > window_start]
        query_patterns: Dict[str, List[QueryMetrics]] = {}
        for metric in recent:
            pattern = self.identify_query_pattern(metric.query)
            query_patterns.setdefault(pattern, []).append(metric)
        pattern_analysis = {}
        for pattern, metrics in query_patterns.items():
            durations = [m.duration for m in metrics if m.success]
            if durations:
                pattern_analysis[pattern] = {
                    'count': len(metrics),
                    'mean_duration': statistics.mean(durations),
                    'success_rate': len([m for m in metrics if m.success]) / len(metrics),
                }
        return {
            'time_window_seconds': time_window,
            'total_recent_queries': len(recent),
            'patterns': pattern_analysis,
            'recommendations': self.generate_optimization_recommendations(pattern_analysis),
        }

    def identify_query_pattern(self, query: str) -> str:
        q = query.lower()
        if 'mutation' in q:
            return 'mutation'
        if 'users' in q or 'articles' in q:
            return 'list_query'
        if 'user' in q or 'article' in q:
            return 'single_item_query'
        return 'other'

    def generate_optimization_recommendations(self, pattern_analysis) -> List[str]:
        recommendations = []
        for pattern, analysis in pattern_analysis.items():
            if analysis['mean_duration'] > 0.5:
                recommendations.append(
                    f"{pattern}: mean latency is high; consider caching or DataLoader batching")
            if analysis['success_rate'] < 0.95:
                recommendations.append(
                    f"{pattern}: success rate is low; inspect recent errors")
        return recommendations


monitor = GraphQLMonitor()


@monitor.track_performance
async def execute_graphql_query(query: str):
    await asyncio.sleep(0.01)  # simulate execution
    return {'data': {'result': 'ok'}}


async def run_monitoring_demo():
    test_queries = [
        '{ user(id: 1) { id username } }',
        '{ users { id username email } }',
        'mutation { createUser(username: "alice") { id } }',
    ]
    for query in test_queries:
        try:
            await execute_graphql_query(query)
        except Exception:
            pass
    return {
        'report': monitor.get_performance_report(),
        'analytics': monitor.get_query_analytics(),
    }
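The p95 figure in the report above is just an index into the sorted latency list. Worked in isolation, using a simple nearest-rank sketch rather than statistics.quantiles:

```python
def p95(durations):
    """Nearest-rank 95th percentile of a list of latencies."""
    ordered = sorted(durations)
    index = min(int(len(ordered) * 0.95), len(ordered) - 1)
    return ordered[index]

latencies = [0.1] * 19 + [2.0]   # one slow outlier among twenty requests
print(p95(latencies))  # 2.0
```

This is why p95 is a better alerting signal than the mean: here the mean is under 0.2s while p95 exposes the 2-second tail that 1 in 20 users actually experiences.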
Also drawn from real project experience: common problems in GraphQL development and their solutions.
# troubleshooting.py
from typing import Any, Dict, List
from graphql.type.schema import GraphQLSchema


class GraphQLTroubleshooter:
    def __init__(self, schema: GraphQLSchema):
        self.schema = schema
        self.common_issues = self._initialize_issue_database()

    def _initialize_issue_database(self) -> Dict[str, Dict]:
        return {
            'n_plus_one': {
                'symptoms': ['performance degrades linearly with data size',
                             'too many database queries'],
                'causes': ['missing DataLoader batching', 'poorly designed resolvers'],
                'solutions': ['adopt the DataLoader pattern', 'optimize field resolution'],
            },
            'schema_validation': {
                'symptoms': ['schema compilation errors', 'type validation failures'],
                'causes': ['conflicting type definitions', 'circular dependencies',
                           'duplicate field definitions'],
                'solutions': ['review type definitions', 'break circular dependencies',
                              'use a schema validation tool'],
            },
            'query_complexity': {
                'symptoms': ['slow or timed-out queries', 'server CPU spikes'],
                'causes': ['deeply nested queries', 'unbounded list fields'],
                'solutions': ['enforce depth and complexity limits', 'paginate list fields'],
            },
            'authentication': {
                'symptoms': ['unexpected permission errors', 'empty results for valid data'],
                'causes': ['missing auth context', 'middleware misconfiguration',
                           'stale tokens'],
                'solutions': ['verify context setup', 'check middleware order',
                              'refresh credentials'],
            },
        }

    def diagnose(self, error: Exception, context: Dict[str, Any]) -> List[str]:
        """Map an error plus runtime context to candidate solutions."""
        error_message = str(error)
        symptoms = self._identify_symptoms(error_message, context)
        possible_issues = []
        for issue_name, issue_info in self.common_issues.items():
            if any(symptom in issue_info['symptoms'] for symptom in symptoms):
                possible_issues.append(issue_name)
        recommendations = []
        for issue in possible_issues:
            recommendations.extend(self.common_issues[issue]['solutions'])
        return recommendations or ['no matching issue found; inspect the logs']

    def _identify_symptoms(self, error_message: str,
                           context: Dict[str, Any]) -> List[str]:
        symptoms = []
        error_lower = error_message.lower()
        if 'timeout' in error_lower or 'slow' in error_lower:
            symptoms.append('slow or timed-out queries')
        if 'type' in error_lower or 'schema' in error_lower:
            symptoms.append('type validation failures')
        if 'permission' in error_lower or 'auth' in error_lower:
            symptoms.append('unexpected permission errors')
        if context.get('db_query_count', 0) > 50:
            symptoms.append('too many database queries')
        if context.get('query_depth', 0) > 10:
            symptoms.append('deeply nested queries')
        return symptoms

    def generate_debug_schema(self) -> Dict[str, Any]:
        """Summarize the schema for debugging: root types, directives, fields."""
        type_map = self.schema.type_map
        debug_info = {
            'type_count': len(type_map),
            'query_type': str(self.schema.query_type) if self.schema.query_type else None,
            'mutation_type': str(self.schema.mutation_type) if self.schema.mutation_type else None,
            'subscription_type': str(self.schema.subscription_type) if self.schema.subscription_type else None,
            'directive_count': len(self.schema.directives),
            'types': {},
        }
        for type_name, graphql_type in type_map.items():
            if type_name.startswith('__'):
                continue  # skip introspection types
            type_info = {
                'kind': graphql_type.__class__.__name__,
                'description': getattr(graphql_type, 'description', None),
            }
            if hasattr(graphql_type, 'fields'):
                type_info['field_count'] = len(graphql_type.fields)
                type_info['fields'] = list(graphql_type.fields.keys())
            debug_info['types'][type_name] = type_info
        return debug_info

    def validate_query_complexity(self, query: str,
                                  max_complexity: int = 100) -> Dict[str, Any]:
        complexity = self._calculate_query_complexity(query)
        depth = self._calculate_query_depth(query)
        issues = []
        if complexity > max_complexity:
            issues.append(f'complexity {complexity} exceeds limit {max_complexity}')
        if depth > 10:
            issues.append(f'depth {depth} exceeds limit 10')
        return {
            'complexity': complexity,
            'depth': depth,
            'valid': len(issues) == 0,
            'issues': issues,
            'suggestions': [
                'split the request into smaller queries',
                'paginate list fields',
            ] if issues else [],
        }

    def _calculate_query_complexity(self, query: str) -> int:
        # Crude proxy: count the tokens left after stripping braces.
        return len(query.replace('{', ' ').replace('}', ' ').split())

    def _calculate_query_depth(self, query: str) -> int:
        depth = 0
        max_depth = 0
        for char in query:
            if char == '{':
                depth += 1
                max_depth = max(max_depth, depth)
            elif char == '}':
                depth -= 1
        return max_depth


def run_troubleshooting_demo(schema: GraphQLSchema) -> Dict[str, Any]:
    troubleshooter = GraphQLTroubleshooter(schema)
    debug_info = troubleshooter.generate_debug_schema()
    sample_query = "{ users { id posts { title comments { content } } } }"
    complexity_check = troubleshooter.validate_query_complexity(sample_query)
    return {
        'debug_info': debug_info,
        'complexity_check': complexity_check,
    }
This article has laid out a complete technical path for implementing GraphQL in Python. As a cornerstone of modern API development, GraphQL is changing how we design and build APIs; hopefully this guide helps you build more efficient and flexible API systems in your future projects.
