Deploying Ollama + DeepSeek + Open WebUI on Ubuntu 22.04

1. Environment: a KVM virtual machine with the following specs

OS            CPU       Memory   GPU
Ubuntu 22.04  64 cores  512 GB   V100 * 3

2. Install the V100 Driver

Update the package index and install prerequisites:
apt update
apt install -y software-properties-common
Add the driver package repository:
add-apt-repository ppa:graphics-drivers/ppa -y
apt install ubuntu-drivers-common
List the driver versions available for installation:
ubuntu-drivers devices
Remove any previously installed NVIDIA drivers:
apt-get remove --purge '^nvidia-.*'
Automatically install the latest version:
ubuntu-drivers install
Or install a specific version:
apt install nvidia-driver-565
Reboot:
reboot
Check the GPU information:
nvidia-smi
Wed Feb 12 09:39:33 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01              Driver Version: 565.57.01      CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla V100-PCIE-16GB-LS       On   |   00000000:00:07.0 Off |                    0 |
| N/A   36C    P0             24W / 250W  |       4MiB / 16384MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  Tesla V100-PCIE-16GB-LS       On   |   00000000:00:08.0 Off |                    0 |
| N/A   38C    P0             24W / 250W  |       4MiB / 16384MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  Tesla V100-PCIE-16GB-LS       On   |   00000000:00:09.0 Off |                    0 |
| N/A   36C    P0             26W / 250W  |       4MiB / 16384MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|  No running processes found                                                             |
+-----------------------------------------------------------------------------------------+
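For scripting, the driver and CUDA versions can be pulled out of the nvidia-smi banner instead of reading the full table. This is a minimal sketch; it assumes the usual "NVIDIA-SMI ... Driver Version: ... CUDA Version: ..." banner format.

```shell
# Extract "driver=<version> cuda=<version>" from the nvidia-smi banner line.
# Assumes the standard banner format shown above; adjust if yours differs.
smi_versions() {
  sed -n 's/.*Driver Version: \([0-9.]*\).*CUDA Version: \([0-9.]*\).*/driver=\1 cuda=\2/p'
}

# On the host:
#   nvidia-smi | smi_versions
```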

3. Install CUDA

Download the CUDA repository keyring (note: this URL points at the ubuntu2404 repo path; for Ubuntu 22.04 the ubuntu2204 path is normally used):
wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2404/x86_64/cuda-keyring_1.1-1_all.deb 
Install the keyring package:
dpkg -i cuda-keyring_1.1-1_all.deb 
Check the available CUDA toolkit versions:
apt policy cuda-toolkit
Install CUDA:
apt install cuda-toolkit
Configure the CUDA environment variables:
export CUDA_HOME=/usr/local/cuda
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:$LD_LIBRARY_PATH
Verify the CUDA installation:
nvcc --version 
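The exports above only last for the current shell. One way to persist them is a profile script; the sketch below writes to a temp file for illustration (the CUDA_PROFILE variable is an assumption of this sketch). On a real host you would point it at /etc/profile.d/cuda.sh, which requires root.

```shell
# Write the CUDA environment variables to a profile script so new shells
# pick them up. Defaults to a temp file here; set CUDA_PROFILE to
# /etc/profile.d/cuda.sh on a real host.
profile="${CUDA_PROFILE:-$(mktemp)}"
cat > "$profile" <<'EOF'
export CUDA_HOME=/usr/local/cuda
export PATH=${CUDA_HOME}/bin:${PATH}
export LD_LIBRARY_PATH=${CUDA_HOME}/lib64:${LD_LIBRARY_PATH}
EOF
echo "CUDA environment written to $profile"
```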

4. Install Ollama

Install with:
curl -fsSL https://ollama.com/install.sh | sh
After installation, check the Ollama service status:
service ollama status

The log contains the following errors:
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.416+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.417+08:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v11_avx
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.417+08:00 level=INFO source=common.go:131 msg="GPU runner incompatible with host system, CPU does not have AVX" runner=cuda_v12_avx
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.417+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 rocm_avx cpu cpu_avx]"
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.417+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.550+08:00 level=INFO source=gpu.go:283 msg="error looking up nvidia GPU memory" error="cuda driver library failed to get device context 801"
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.553+08:00 level=INFO source=gpu.go:283 msg="error looking up nvidia GPU memory" error="cuda driver library failed to get device context 801"
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.557+08:00 level=INFO source=gpu.go:283 msg="error looking up nvidia GPU memory" error="cuda driver library failed to get device context 801"
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.558+08:00 level=INFO source=gpu.go:392 msg="no compatible GPUs were discovered"
Feb 11 17:50:06 i-mvlzfacx ollama[6794]: time=2025-02-11T17:50:06.558+08:00 level=INFO source=types.go:131 msg="inference compute" id=0 library=cpu variant="no vector extensions" driver=0.0 total="503.7 GiB" available=>

Root cause

According to the message "GPU runner incompatible with host system, CPU does not have AVX", the VM's vCPUs lack the AVX instruction set, so Ollama cannot use the GPUs.

Check whether the CPU supports AVX:
lscpu | grep avx

The output contains no AVX flag.
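The check can be made explicit with a small helper. This is a sketch: the check_flag function is a name invented here, and the /proc/cpuinfo parsing assumes the usual Linux flags line.

```shell
# Report whether a CPU flags string contains a given flag.
# $1: space-separated flags, $2: flag to look for
check_flag() {
  if printf '%s\n' "$1" | grep -qw "$2"; then
    echo "$2: yes"
  else
    echo "$2: no"
  fi
}

# On the host:
#   flags="$(grep -m1 '^flags' /proc/cpuinfo | cut -d: -f2)"
#   check_flag "$flags" avx
#   check_flag "$flags" avx2
```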


5. Modify the VM's config.xml

Define the <cpu> element as follows:

<cpu mode='custom' match='exact' check='full'>
  <model fallback='forbid'>Skylake-Server</model>
  <topology sockets='4' cores='16' threads='1'/>
  <feature policy='require' name='avx'/>
  <feature policy='require' name='avx2'/>
  <feature policy='require' name='hypervisor'/>
</cpu>

Redefine the VM, then check for AVX again:

lscpu | grep avx

Output:

Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch cpuid_fault invpcid_single pti fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm mpx avx512f avx512dq rdseed adx smap clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 arat 

6. Verify That Ollama Now Works

Check the Ollama service status:

service ollama status 

Output:

ollama.service - Ollama Service
     Loaded: loaded (/etc/systemd/system/ollama.service; enabled; vendor preset: enabled)
     Active: active (running) since Wed 2025-02-12 09:32:09 CST; 10min ago
   Main PID: 1529 (ollama)
      Tasks: 27 (limit: 618662)
     Memory: 8.1G
        CPU: 1min 21.889s
     CGroup: /system.slice/ollama.service
             └─1529 /usr/local/bin/ollama serve

Feb 12 09:32:10 i-mvlzfacx ollama[1529]: [GIN-debug] HEAD /api/version --> github.com/ollama/ollama/server.(*Server).GenerateRoutes.func2 (5 handlers)
Feb 12 09:32:10 i-mvlzfacx ollama[1529]: time=2025-02-12T09:32:10.875+08:00 level=INFO source=routes.go:1238 msg="Listening on 127.0.0.1:11434 (version 0.5.7)"
Feb 12 09:32:10 i-mvlzfacx ollama[1529]: time=2025-02-12T09:32:10.885+08:00 level=INFO source=routes.go:1267 msg="Dynamic LLM libraries" runners="[cpu_avx2 cuda_v11_avx cuda_v12_avx rocm_avx cpu cpu_avx]"
Feb 12 09:32:10 i-mvlzfacx ollama[1529]: time=2025-02-12T09:32:10.886+08:00 level=INFO source=gpu.go:226 msg="looking for compatible GPUs"
Feb 12 09:32:12 i-mvlzfacx ollama[1529]: time=2025-02-12T09:32:12.464+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-745b3d31-7b14-6335-7ea8-d27ea7261802 library=cuda variant=v12 compute=7.0 driver=12.7 name="Te>
Feb 12 09:32:12 i-mvlzfacx ollama[1529]: time=2025-02-12T09:32:12.464+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-bd0014a9-9fb8-ade2-6054-a721c20dbef1 library=cuda variant=v12 compute=7.0 driver=12.7 name="Te>
Feb 12 09:32:12 i-mvlzfacx ollama[1529]: time=2025-02-12T09:32:12.464+08:00 level=INFO source=types.go:131 msg="inference compute" id=GPU-5cfd0bcc-c8c5-29ec-4f8d-630adb6d33b2 library=cuda variant=v12 compute=7.0 driver=12.7 name="Te>
Feb 12 09:36:42 i-mvlzfacx ollama[1529]: [GIN] 2025/02/12 - 09:36:42 | 200 | 18.869142ms | 127.0.0.1 | HEAD "/"
Feb 12 09:36:42 i-mvlzfacx ollama[1529]: [GIN] 2025/02/12 - 09:36:42 | 404 | 644.305µs | 127.0.0.1 | POST "/api/show"
Feb 12 09:36:45 i-mvlzfacx ollama[1529]: time=2025-02-12T09:36:45.027+08:00 level=INFO source=download.go:175 msg="downloading 6e9f90f02bb3 in 16 561 MB part(s)"

7. Download DeepSeek with Ollama

Run:

# ollama run deepseek-r1:14b
pulling manifest
pulling 6e9f90f02bb3... 100% ▕███████████████████████████████████████████████████▏ 9.0 GB
pulling 369ca498f347... 100% ▕███████████████████████████████████████████████████▏  387 B
pulling 6e4c38e1172f... 100% ▕███████████████████████████████████████████████████▏ 1.1 KB
pulling f4d24e9138dd... 100% ▕███████████████████████████████████████████████████▏  148 B
pulling 3c24b0c80794... 100% ▕███████████████████████████████████████████████████▏  488 B
verifying sha256 digest
writing manifest
success
>>>

8. Monitor the GPUs

watch -n 1 nvidia-smi

Output:

Every 1.0s: nvidia-smi                                 i-mvlzfacx: Wed Feb 12 09:56:13 2025

Wed Feb 12 09:56:13 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01              Driver Version: 565.57.01      CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla V100-PCIE-16GB-LS       On   |   00000000:00:07.0 Off |                    0 |
| N/A   38C    P0             38W / 250W  |   10694MiB / 16384MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  Tesla V100-PCIE-16GB-LS       On   |   00000000:00:08.0 Off |                    0 |
| N/A   37C    P0             24W / 250W  |       4MiB / 16384MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  Tesla V100-PCIE-16GB-LS       On   |   00000000:00:09.0 Off |                    0 |
| N/A   35C    P0             26W / 250W  |       4MiB / 16384MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      2151      C   ...rs/cuda_v12_avx/ollama_llama_server    10690MiB  |
+-----------------------------------------------------------------------------------------+

At this point only one of the three V100s is being used.
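Whether all GPUs are actually serving the model can be checked without scanning the full table, by counting the distinct GPUs that have a compute process. This is a sketch: the active_gpus helper is a name invented here, and it relies on nvidia-smi's standard query mode.

```shell
# Count distinct GPUs with an active compute process. Reads GPU UUIDs
# (one line per process) on stdin and prints the number of unique GPUs.
active_gpus() {
  sort -u | grep -c .
}

# On the host:
#   nvidia-smi --query-compute-apps=gpu_uuid --format=csv,noheader | active_gpus
```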


9. Add CUDA_VISIBLE_DEVICES to the Environment

export CUDA_VISIBLE_DEVICES=0,1,2

Restart Ollama:

service ollama restart 
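Note that when Ollama runs as a systemd service, an export in an interactive shell may not reach the service process. One way to persist the variable for the service is a systemd drop-in; the sketch below writes to a temp directory for illustration (the DROPIN_DIR variable is an assumption of this sketch). On a real host use /etc/systemd/system/ollama.service.d, which requires root.

```shell
# Persist CUDA_VISIBLE_DEVICES for the Ollama systemd service via a drop-in.
# Defaults to a temp dir here; set DROPIN_DIR to
# /etc/systemd/system/ollama.service.d on a real host.
DROPIN_DIR="${DROPIN_DIR:-$(mktemp -d)}"
mkdir -p "$DROPIN_DIR"
cat > "$DROPIN_DIR/override.conf" <<'EOF'
[Service]
Environment="CUDA_VISIBLE_DEVICES=0,1,2"
EOF

# On the host, apply with:
#   systemctl daemon-reload && systemctl restart ollama
```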

Run DeepSeek again and watch the GPU monitor; all three GPUs are now in use:

Every 1.0s: nvidia-smi                                 i-mvlzfacx: Wed Feb 12 10:19:25 2025

Wed Feb 12 10:19:25 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 565.57.01              Driver Version: 565.57.01      CUDA Version: 12.7     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  Tesla V100-PCIE-16GB-LS       On   |   00000000:00:07.0 Off |                    0 |
| N/A   38C    P0             38W / 250W  |   14452MiB / 16384MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   1  Tesla V100-PCIE-16GB-LS       On   |   00000000:00:08.0 Off |                    0 |
| N/A   39C    P0             38W / 250W  |   13804MiB / 16384MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+
|   2  Tesla V100-PCIE-16GB-LS       On   |   00000000:00:09.0 Off |                    0 |
| N/A   37C    P0             38W / 250W  |   14216MiB / 16384MiB  |      0%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

+-----------------------------------------------------------------------------------------+
| Processes:                                                                              |
|  GPU   GI   CI        PID   Type   Process name                              GPU Memory |
|        ID   ID                                                               Usage      |
|=========================================================================================|
|    0   N/A  N/A      6067      C   ...rs/cuda_v12_avx/ollama_llama_server    14448MiB  |
|    1   N/A  N/A      6067      C   ...rs/cuda_v12_avx/ollama_llama_server    13800MiB  |
|    2   N/A  N/A      6067      C   ...rs/cuda_v12_avx/ollama_llama_server    14212MiB  |
+-----------------------------------------------------------------------------------------+

10. Install Open WebUI

Environment setup:

Open WebUI requires Python 3.11. Create a new environment with:

conda create --name open-webui python=3.11

Activate the environment:

conda activate open-webui 

Install Open WebUI with pip:

pip install open-webui 
Start the service:
RAG_EMBEDDING_MODEL="" ENABLE_OPENAI_API="false" CORS_ALLOW_ORIGIN="*" open-webui serve --host 0.0.0.0 --port 5000
  • RAG_EMBEDDING_MODEL: set to empty so the default embedding model is not loaded.
  • ENABLE_OPENAI_API: disable requests to the OpenAI API.
  • CORS_ALLOW_ORIGIN: allow cross-origin requests.
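To avoid retyping the variables on every start, they can be bundled into a small wrapper script. This is a sketch; the script name run-open-webui.sh is arbitrary, and it simply reuses the serve command and variables shown above.

```shell
# Generate a wrapper script that always starts Open WebUI with the same
# environment variables and flags.
cat > run-open-webui.sh <<'EOF'
#!/bin/sh
export RAG_EMBEDDING_MODEL=""
export ENABLE_OPENAI_API="false"
export CORS_ALLOW_ORIGIN="*"
exec open-webui serve --host 0.0.0.0 --port 5000
EOF
chmod +x run-open-webui.sh
```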
File upload configuration:

Modify the settings as follows:

(Screenshot: Open WebUI upload settings)

After uploading, the file spinner keeps turning, as shown below. GPU monitoring and the Ollama process both look normal in the background. After waiting a while, new content can be submitted again; the model is presumably busy running inference.

(Screenshot: file stuck in the uploading state)
