Installing and Configuring the OpenClaw AI Assistant on Ubuntu 22.04
This guide walks through installing and configuring the OpenClaw AI assistant on Ubuntu 22.04: the one-line install script, model integration (DeepSeek, GLM, MiniMax, and others), messaging channels (Feishu, DingTalk), and web search via Exa. With multi-model switching and tool calling configured, OpenClaw can assist with programming and task automation.


This project uses OpenClaw so that AI can assist with porting GNU Radio, writing FPGA programs, and RF and ARM Linux programming.
See the official site for the latest information: https://clawd.org.cn/
curl -fsSL https://clawd.org.cn/install.sh | sudo bash
Enter Y to confirm the installation.
A DeepSeek API key is required: visit https://platform.deepseek.com/sign_in to register and obtain one, then enter it in the installer terminal.
A Feishu (Lark) bot is also needed: visit https://open.feishu.cn, open the developer console, register, and create an AI bot.
After creation, record the App ID and App Secret; enter both in the installer terminal.
You can also configure the channel separately with:
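If you prefer editing the configuration file directly, the channel credentials live in openclaw.json. The fragment below is only a sketch: the field names (channels, feishu, appId, appSecret) are assumptions, so verify them against what openclaw-cn configure actually writes.

```json
{
  "channels": {
    "feishu": {
      "appId": "cli_xxxxxxxx",
      "appSecret": "your App Secret"
    }
  }
}
```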
openclaw-cn configure --section channels
If anything goes wrong, run openclaw-cn onboard --install-daemon to reconfigure.
If the web UI will not open because the Gateway is missing, install the required tools:
sudo apt install net-tools
Configure the free official GLM-4.7-Flash API: visit https://bigmodel.cn/, register as a developer, and obtain an API key.
Create a new API key in the console, copy it, and write it into the openclaw.json file:
{
  "models": {
    "providers": {
      "glm": {
        "baseUrl": "https://open.bigmodel.cn/api/paas/v4",
        "apiKey": "your apiKey",
        "api": "openai-completions",
        "models": [
          {
            "id": "glm-4.7-flash",
            "name": "GLM-4.7 Flash",
            "contextWindow": 128000,
            "maxTokens": 4096,
            "reasoning": false,
            "input": ["text"],
            "cost": { "input": 0, "output": 0 }
          }
        ]
      }
    }
  }
}
The cost fields are zero here since the Flash tier is free.
Restart OpenClaw:
openclaw-cn gateway restart
Visit https://open-dev.dingtalk.com and obtain a Client ID (AppKey) and Client Secret (AppSecret) from the DingTalk developer platform.
Add the required scopes under "Permission Management".
Install the DingTalk plugin (it is not built in):
openclaw-cn plugins install https://github.com/soimy/clawdbot-channel-dingtalk.git
Add your DeepSeek API key to the openclaw.json configuration file so OpenClaw can switch to DeepSeek automatically when the free GLM token quota runs out.
Restart the service:
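As a sketch, adding DeepSeek alongside GLM could look like the fragment below. The provider block mirrors the GLM one above; the baseUrl and model id are DeepSeek's published values, but the exact key names and fallback behavior depend on your OpenClaw version, so treat this as illustrative.

```json
{
  "models": {
    "providers": {
      "deepseek": {
        "baseUrl": "https://api.deepseek.com",
        "apiKey": "your DeepSeek apiKey",
        "api": "openai-completions",
        "models": [
          {
            "id": "deepseek-chat",
            "name": "DeepSeek Chat",
            "contextWindow": 128000,
            "maxTokens": 8192
          }
        ]
      }
    }
  }
}
```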
openclaw-cn gateway restart
You can then keep extending openclaw.json to connect additional large models.
Visit https://exa.ai/, register for free, and obtain an API key.
Hand the API key to OpenClaw by exporting it:
export EXA_API_KEY="YOUR_API_KEY"
Or add it to a .env file:
EXA_API_KEY=YOUR_API_KEY
Restart the service:
openclaw-cn gateway restart
Exa can also provide real-time web search, code context, and company research to OpenAI Codex.
Run:
codex mcp add exa --url https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY
To enable specific tools only:
https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY&tools=web_search_exa,get_code_context_exa,people_search_exa
To enable all tools:
https://mcp.exa.ai/mcp?exaApiKey=YOUR_API_KEY&tools=web_search_exa,web_search_advanced_exa,get_code_context_exa,crawling_exa,company_research_exa,people_search_exa,deep_researcher_start,deep_researcher_check
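The tool list is just a comma-separated tools query parameter, so the endpoint URL can be assembled programmatically. A small sketch (the helper name is mine; the tool names come from the list above):

```python
from urllib.parse import urlencode

def exa_mcp_url(api_key, tools=None):
    """Build an Exa MCP endpoint URL with an optional tool whitelist."""
    params = {"exaApiKey": api_key}
    if tools:
        # The server expects a single comma-separated `tools` parameter.
        params["tools"] = ",".join(tools)
    # safe="," keeps the commas literal instead of percent-encoding them.
    return "https://mcp.exa.ai/mcp?" + urlencode(params, safe=",")

print(exa_mcp_url("YOUR_API_KEY", ["web_search_exa", "get_code_context_exa"]))
```

Omitting the tools argument reproduces the "all tools" form with only the API key.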
Troubleshooting: if the tools do not show up, restart the MCP client after updating the configuration.
curl -X POST 'https://api.exa.ai/search' \
-H 'x-api-key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{ "query": "latest developments in AI safety research", "type": "auto", "num_results": 10, "contents": { "text": { "max_characters": 20000 } } }'
Function calling lets an AI agent decide dynamically, based on the conversation context, when to search the web.
import json
from openai import OpenAI
from exa_py import Exa

openai = OpenAI()  # reads OPENAI_API_KEY from the environment
exa = Exa()        # reads EXA_API_KEY from the environment

tools = [{"type": "function", "function": {
    "name": "exa_search",
    "description": "Search the web for current information.",
    "parameters": {"type": "object",
                   "properties": {"query": {"type": "string", "description": "Search query"}},
                   "required": ["query"]}}}]

def exa_search(query: str) -> str:
    results = exa.search_and_contents(query, type="auto", num_results=10,
                                      text={"max_characters": 20000})
    return "\n".join(f"{r.title}: {r.url}" for r in results.results)

messages = [{"role": "user", "content": "What's the latest in AI safety?"}]
response = openai.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

if response.choices[0].message.tool_calls:
    tool_call = response.choices[0].message.tool_calls[0]
    search_results = exa_search(json.loads(tool_call.function.arguments)["query"])
    # Append the assistant's tool call, then the tool result, and ask again.
    messages.append(response.choices[0].message)
    messages.append({"role": "tool", "tool_call_id": tool_call.id, "content": search_results})
    final = openai.chat.completions.create(model="gpt-4o", messages=messages)
    print(final.choices[0].message.content)
import anthropic
from exa_py import Exa

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
exa = Exa()                     # reads EXA_API_KEY from the environment

tools = [{"name": "exa_search",
          "description": "Search the web for current information.",
          "input_schema": {"type": "object",
                           "properties": {"query": {"type": "string", "description": "Search query"}},
                           "required": ["query"]}}]

def exa_search(query: str) -> str:
    results = exa.search_and_contents(query, type="auto", num_results=10,
                                      text={"max_characters": 20000})
    return "\n".join(f"{r.title}: {r.url}" for r in results.results)

messages = [{"role": "user", "content": "Latest quantum computing developments?"}]
response = client.messages.create(model="claude-sonnet-4-20250514", max_tokens=4096,
                                  tools=tools, messages=messages)

if response.stop_reason == "tool_use":
    tool_use = next(b for b in response.content if b.type == "tool_use")
    tool_result = exa_search(tool_use.input["query"])
    # Echo the assistant turn, then return the tool result as a user turn.
    messages.append({"role": "assistant", "content": response.content})
    messages.append({"role": "user", "content": [{"type": "tool_result",
                                                  "tool_use_id": tool_use.id,
                                                  "content": tool_result}]})
    final = client.messages.create(model="claude-sonnet-4-20250514", max_tokens=4096,
                                   tools=tools, messages=messages)
    print(final.content[0].text)
| Type | Best For | Speed | Depth |
|---|---|---|---|
| fast | Real-time apps, autocomplete, quick lookups | Fastest | Basic |
| auto | Most queries - balanced relevance & speed | Medium | Smart |
| deep | Research, enrichment, thorough results | Slow | Deep |
| deep-reasoning | Complex research, multi-step reasoning | Slowest | Deepest |
Tip: type="auto" works well for most queries. Use type="deep" when you need thorough research results.
Choose ONE content type per request:
| Type | Config | Best For |
|---|---|---|
| Text | "text": {"max_characters": 20000} | Full content extraction, RAG |
| Highlights | "highlights": {"max_characters": 4000} | Snippets, summaries, lower cost |
Warning: Using text: true can significantly increase token count. Add max_characters limit or use highlights.
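The two content types differ only in the contents block of the request. A small helper to build either payload (the helper name is mine; the field values mirror the table above):

```python
def search_payload(query, mode="highlights"):
    """Build an Exa /search payload with exactly one content type."""
    if mode == "text":
        # Full extraction: larger responses, higher downstream token cost.
        contents = {"text": {"max_characters": 20000}}
    elif mode == "highlights":
        # Snippets only: cheaper, good for summaries.
        contents = {"highlights": {"max_characters": 4000}}
    else:
        raise ValueError("mode must be 'text' or 'highlights'")
    return {"query": query, "type": "auto", "num_results": 10, "contents": contents}
```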
Domain filters (includeDomains / excludeDomains) are usually not needed. Example:
{"includeDomains":["arxiv.org","github.com"],"excludeDomains":["pinterest.com"]}
{"query":"latest developments in AI safety research","num_results":10,"contents":{"text":{"max_characters":20000}}}
Tips:
- type: "auto" works for most queries.
- Use category filters to search dedicated indexes.

People search (category: "people"): find people by role, expertise, or what they work on.
curl -X POST 'https://api.exa.ai/search' \
-H 'x-api-key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{ "query": "software engineer distributed systems", "category": "people", "type": "auto", "num_results": 10 }'
Tips: Use SINGULAR form. Describe what they work on.
category: "company")Find companies by industry, criteria, or attributes.
curl -X POST 'https://api.exa.ai/search' \
-H 'x-api-key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{ "query": "AI startup healthcare", "category": "company", "type": "auto", "num_results": 10 }'
Tips: Use SINGULAR form. Simple entity queries.
category: "news")News articles.
curl -X POST 'https://api.exa.ai/search' \
-H 'x-api-key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{ "query": "OpenAI announcements", "category": "news", "type": "auto", "num_results": 10, "contents": { "text": { "max_characters": 20000 } } }'
Tips: Use livecrawl: "preferred" for breaking news.
category: "research paper")Academic papers.
curl -X POST 'https://api.exa.ai/search' \
-H 'x-api-key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{ "query": "transformer architecture improvements", "category": "research paper", "type": "auto", "num_results": 10, "contents": { "text": { "max_characters": 20000 } } }'
Tips: Includes arxiv.org, paperswithcode.com.
category: "tweet")Twitter/X posts.
curl -X POST 'https://api.exa.ai/search' \
-H 'x-api-key: YOUR_API_KEY' \
-H 'Content-Type: application/json' \
-d '{ "query": "AI safety discussion", "category": "tweet", "type": "auto", "num_results": 10, "contents": { "text": { "max_characters": 20000 } } }'
Tips: Good for real-time discussions.
maxAgeHours sets the maximum acceptable age for cached content.
| Value | Behavior | Best For |
|---|---|---|
| 24 | Use cache if less than 24 hours old, otherwise livecrawl | Daily-fresh content |
| 1 | Use cache if less than 1 hour old, otherwise livecrawl | Near real-time data |
| 0 | Always livecrawl (ignore cache entirely) | Real-time data where cached content is unusable |
| -1 | Never livecrawl (cache only) | Maximum speed, historical/static content |
| (omit) | Default behavior | Recommended — balanced speed and freshness |
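The table rows above map to a single optional field in the request. Where exactly maxAgeHours sits in the payload is an assumption here (shown under contents, next to the content options); check the /search reference before relying on it.

```python
def payload_with_max_age(query, max_age_hours=None):
    """Attach maxAgeHours when given; omit it for the default behavior."""
    p = {"query": query, "type": "auto", "num_results": 10,
         "contents": {"text": {"max_characters": 20000}}}
    if max_age_hours is not None:
        # 0 = always livecrawl, -1 = cache only (see the table above).
        # NOTE: the placement under "contents" is an assumption.
        p["contents"]["maxAgeHours"] = max_age_hours
    return p
```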
Beyond /search, Exa offers these endpoints:
| Endpoint | Description |
|---|---|
| /contents | Get contents for known URLs |
| /answer | Q&A with citations from web search |
Example - Get contents for URLs:
POST /contents
{"urls":["https://example.com/article"],"text":{"max_characters":20000}}
Troubleshooting:
- Results not relevant? Try type: "auto", or type: "deep" for more thorough retrieval.
- Need structured data from search? Use type: "deep" or type: "deep-reasoning" with outputSchema.
- Results too slow? Switch to type: "fast" and lower num_results.
- No results? Fall back to type: "auto".