Fine-Tuning Qwen2.5 with LLaMA-Factory and Deploying It in GGUF Format

Among open-source large language models, the Qwen series is widely popular for its strong Chinese-language capabilities and permissive license. However, a base model alone often cannot meet the needs of a specific business scenario, and fine-tuning is needed to inject domain knowledge. How do you then deploy the fine-tuned model efficiently? GGUF is the format broadly supported by llama.cpp and other inference backends, offering cross-platform portability and memory-mapped loading. This article records the full process of fine-tuning Qwen2.5-7B-Instruct with LLaMA-Factory and converting the result to GGUF via llama.cpp, and shares a classic error encountered during conversion along with its solution.

1. Environment Setup
We work on a Linux server with Conda installed for environment isolation. The following components are needed:
Python 3.10
LLaMA-Factory (for fine-tuning)
llama.cpp (for format conversion)
Dependency libraries such as transformers, peft, and accelerate

1.1 Create a Conda Environment

```bash
conda create -n llama_factory python=3.10 -y
conda activate llama_factory
```
1.2 Install LLaMA-Factory
LLaMA-Factory is an efficient fine-tuning framework that supports many models and algorithms. We install it from source:

```bash
git clone https://github.com/hiyouga/LLaMA-Factory.git
cd LLaMA-Factory
pip install -e ".[torch,metrics]"
```

If you hit dependency conflicts during installation, you can adjust the transformers version as needed, though keeping it recent is recommended.

1.3 Install llama.cpp

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
pip install -r requirements.txt
```

Note: the conversion script convert_hf_to_gguf.py depends on transformers, so make sure the versions are compatible.
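The Python requirements only cover the conversion script. To run or quantize the resulting GGUF file later, you also need the compiled llama.cpp binaries. A minimal build sketch, assuming a recent llama.cpp checkout with the CMake build system (flag and binary names may differ across versions):

```bash
# Build the llama.cpp binaries (CPU-only; add -DGGML_CUDA=ON for CUDA builds)
cmake -B build
cmake --build build --config Release -j
# Binaries such as llama-cli and llama-quantize end up under build/bin/
```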

2. Fine-Tuning Qwen2.5-7B-Instruct with LLaMA-Factory
We use Qwen2.5-7B-Instruct as the base model and run instruction fine-tuning on a custom dataset. Assume the data is prepared in JSON format, with each record containing an instruction field and an output field.

2.1 Prepare the Data
Place the dataset under the LLaMA-Factory/data directory and register it in the dataset configuration file dataset_info.json, for example:

```json
{
  "my_dataset": {
    "file_name": "my_dataset.json",
    "columns": {
      "prompt": "instruction",
      "response": "output"
    }
  }
}
```

2.2 Configure the Fine-Tuning Parameters
LLaMA-Factory can be configured via the command line or a YAML file. Here we run LoRA fine-tuning from the command line (note: train_bash.py is the entry point of older LLaMA-Factory releases; newer versions expose the same functionality through the llamafactory-cli command):

```bash
CUDA_VISIBLE_DEVICES=0 python src/train_bash.py \
    --stage sft \
    --model_name_or_path Qwen/Qwen2.5-7B-Instruct \
    --dataset my_dataset \
    --dataset_dir ./data \
    --finetuning_type lora \
    --lora_target q_proj,v_proj \
    --output_dir ./output/qwen2.5-lora \
    --overwrite_cache \
    --per_device_train_batch_size 4 \
    --gradient_accumulation_steps 4 \
    --lr_scheduler_type cosine \
    --logging_steps 10 \
    --save_steps 500 \
    --learning_rate 1e-4 \
    --num_train_epochs 3 \
    --fp16
```

After training completes, the LoRA weights are saved in the ./output/qwen2.5-lora directory.
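As a quick sanity check, this directory should contain LoRA adapter files rather than full model weights. The exact file list varies with the LLaMA-Factory and PEFT versions; roughly:

```bash
ls ./output/qwen2.5-lora
# Typically includes adapter_config.json and adapter_model.safetensors,
# plus tokenizer files, training logs, and checkpoint-* subdirectories
```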

2.3 Merge the LoRA Weights (If You Need a Full Exported Model)
If you want a complete HuggingFace-format model rather than just a LoRA adapter, merge the weights with the export_model.py script (newer LLaMA-Factory versions provide this via llamafactory-cli export; also note that for Qwen models, --template qwen is usually the appropriate choice rather than default):

```bash
python src/export_model.py \
    --model_name_or_path Qwen/Qwen2.5-7B-Instruct \
    --adapter_name_or_path ./output/qwen2.5-lora \
    --template default \
    --finetuning_type lora \
    --export_dir ./output/qwen2.5-merged
```

The merged model is saved in ./output/qwen2.5-merged, including all necessary configuration files, the tokenizer, and the weight files.
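Before converting, it is worth verifying that the merged directory actually contains everything convert_hf_to_gguf.py needs, in particular a complete tokenizer. A sketch of what to expect (file names and shard counts depend on the model size and save settings):

```bash
ls ./output/qwen2.5-merged
# Expected (approximately):
#   config.json  generation_config.json
#   model-0000*-of-0000*.safetensors  model.safetensors.index.json
#   tokenizer.json  tokenizer_config.json  vocab.json  merges.txt
```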

3. Converting the Fine-Tuned Model to GGUF Format
3.1 Prepare the Conversion Environment
To avoid dependency conflicts with LLaMA-Factory, we use a separate conda environment for the conversion. Create a new environment and install the necessary tools:

```bash
conda create -n llama.cpp python=3.10 -y
conda activate llama.cpp
pip install torch transformers sentencepiece protobuf
```

3.2 Run the llama.cpp Conversion Script
Enter the llama.cpp directory and run the conversion command (assuming the merged model is at /mnt/workspace/output/qwen2.5-merged):

```bash
cd /path/to/llama.cpp
python convert_hf_to_gguf.py /mnt/workspace/output/qwen2.5-merged \
    --outtype f16 \
    --verbose \
    --outfile /mnt/workspace/qwen2.5-7B-instruct.gguf
```

3.3 A Classic Error and Its Fix
Running the command above failed with the following error (the per-tensor export lines, identical in form across all 28 transformer blocks, are abridged):

```
python llama.cpp/convert_hf_to_gguf.py /mnt/workspace/.cache/modelscope/models/Qwen/Qwen2.5-7B-Instruct-lora --outtype f16 --verbose --outfile /mnt/workspace/Meta-Llama-3-8B-Instruct-gguf.gguf
INFO:hf-to-gguf:Loading model: Qwen2.5-7B-Instruct-lora
INFO:hf-to-gguf:Model architecture: Qwen2ForCausalLM
INFO:hf-to-gguf:gguf: loading model weight map from 'model.safetensors.index.json'
INFO:hf-to-gguf:gguf: indexing model part 'model-00001-of-00004.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00002-of-00004.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00003-of-00004.safetensors'
INFO:hf-to-gguf:gguf: indexing model part 'model-00004-of-00004.safetensors'
INFO:gguf.gguf_writer:gguf: This GGUF file is for Little Endian only
INFO:hf-to-gguf:Exporting model...
INFO:hf-to-gguf:output.weight, torch.bfloat16 --> F16, shape = {3584, 152064}
INFO:hf-to-gguf:token_embd.weight, torch.bfloat16 --> F16, shape = {3584, 152064}
INFO:hf-to-gguf:blk.0.attn_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:blk.0.ffn_down.weight, torch.bfloat16 --> F16, shape = {18944, 3584}
... (per-tensor lines for blocks 0-27 omitted) ...
INFO:hf-to-gguf:blk.27.attn_v.weight, torch.bfloat16 --> F16, shape = {3584, 512}
INFO:hf-to-gguf:output_norm.weight, torch.bfloat16 --> F32, shape = {3584}
INFO:hf-to-gguf:Set meta model
INFO:hf-to-gguf:Set model parameters
INFO:hf-to-gguf:gguf: context length = 32768
INFO:hf-to-gguf:gguf: embedding length = 3584
INFO:hf-to-gguf:gguf: feed forward length = 18944
INFO:hf-to-gguf:gguf: head count = 28
INFO:hf-to-gguf:gguf: key-value head count = 4
WARNING:hf-to-gguf:Unknown RoPE type: default
INFO:hf-to-gguf:gguf: rope scaling type = NONE
INFO:hf-to-gguf:gguf: rope theta = 1000000.0
INFO:hf-to-gguf:gguf: rms norm epsilon = 1e-06
INFO:hf-to-gguf:gguf: file type = 1
INFO:hf-to-gguf:Set model quantization version
INFO:hf-to-gguf:Set model tokenizer
Traceback (most recent call last):
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 3534, in set_vocab
    self._set_vocab_sentencepiece()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 1358, in _set_vocab_sentencepiece
    tokens, scores, toktypes = self._create_vocab_sentencepiece()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 1375, in _create_vocab_sentencepiece
    raise FileNotFoundError(f"File not found: {tokenizer_path}")
FileNotFoundError: File not found: /mnt/workspace/.cache/modelscope/models/Qwen/Qwen2.5-7B-Instruct-lora/tokenizer.model

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 11934, in <module>
    main()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 11928, in main
    model_instance.write()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 689, in write
    self.prepare_metadata(vocab_only=False)
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 830, in prepare_metadata
    self.set_vocab()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 3536, in set_vocab
    self._set_vocab_gpt2()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 1294, in _set_vocab_gpt2
    tokens, toktypes, tokpre = self.get_vocab_base()
  File "/mnt/workspace/llama.cpp/convert_hf_to_gguf.py", line 978, in get_vocab_base
    tokenizer = AutoTokenizer.from_pretrained(self.dir_model)
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/models/auto/tokenization_auto.py", line 814, in from_pretrained
    return tokenizer_class.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2029, in from_pretrained
    return cls._from_pretrained(
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/tokenization_utils_base.py", line 2261, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/models/qwen2/tokenization_qwen2_fast.py", line 129, in __init__
    super().__init__(
  File "/root/miniconda3/envs/llama.cpp/lib/python3.10/site-packages/transformers/tokenization_utils_fast.py", line 111, in __init__
    fast_tokenizer = TokenizerFast.from_file(fast_tokenizer_file)
Exception: data did not match any variant of untagged enum ModelWrapper at line 757443 column 3
```

Root Cause Analysis
The error occurs while the conversion script is loading the tokenizer.json file: its JSON structure does not match what the parser expects. Two causes are typical (a quick diagnostic follows the list):

A corrupted tokenizer.json: the file may have been downloaded incompletely or accidentally modified during fine-tuning.

An incompatible transformers version: newer tokenizer.json layouts require a sufficiently recent transformers (and underlying tokenizers) release to parse.
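To tell the two apart, try loading the tokenizer directly in the conversion environment; a parse failure here reproduces the same ModelWrapper error independently of llama.cpp (the path below is this article's merged-model directory):

```bash
# Reproduce the tokenizer load outside of the conversion script
python -c "
from transformers import AutoTokenizer
tok = AutoTokenizer.from_pretrained('/mnt/workspace/output/qwen2.5-merged')
print('tokenizer OK, vocab size =', tok.vocab_size)
"
```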

Solution
After some investigation, this turned out to be a transformers version problem: the environment had an older release (4.36.0 in our case), while the Qwen2.5 tokenizer requires a newer one. Force-installing transformers==4.45.0 fixed it:

```bash
pip install --force-reinstall transformers==4.45.0
```

Re-running the conversion command then exported the GGUF file successfully.

Note: if tokenizer.json in the model directory really is corrupted, re-download it from the official HuggingFace repository and overwrite the local copy.
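A sketch of that re-download using huggingface-cli (assumes the huggingface_hub package is installed; the target directory is this article's merged-model path):

```bash
# Fetch fresh tokenizer files from the official repo, overwriting local copies
huggingface-cli download Qwen/Qwen2.5-7B-Instruct \
    tokenizer.json tokenizer_config.json vocab.json merges.txt \
    --local-dir /mnt/workspace/output/qwen2.5-merged
```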

3.4 Verify the Conversion Result
After conversion, check the output file:

```bash
ls -lh /mnt/workspace/qwen2.5-7B-instruct.gguf
```

You can then use llama.cpp's simple CLI tool to verify the model loads (in recent llama.cpp builds the binary is named llama-cli rather than main):

```bash
./main -m /mnt/workspace/qwen2.5-7B-instruct.gguf -p "你好,请介绍一下你自己。" -n 100
```

If it produces normal output, the conversion succeeded.
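For deployment, the f16 file is usually quantized further to cut memory use. A sketch with llama-quantize (the binary name and location depend on your build; older builds call it quantize):

```bash
# Quantize the f16 GGUF to 4-bit (Q4_K_M), a common quality/size trade-off
./build/bin/llama-quantize \
    /mnt/workspace/qwen2.5-7B-instruct.gguf \
    /mnt/workspace/qwen2.5-7B-instruct-Q4_K_M.gguf \
    Q4_K_M
```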

4. Summary
This walkthrough covered the following:
LoRA fine-tuning of Qwen2.5-7B-Instruct with LLaMA-Factory, then merging the adapter into a full model.
Converting the fine-tuned model to GGUF with llama.cpp's conversion tool for efficient deployment.
Fixing the tokenizer.json parsing error hit during conversion; the key is keeping the transformers version compatible with the model.

Key takeaways:
Version compatibility: the conversion script is sensitive to the transformers version; use a recent stable release (e.g. 4.45.0).
File integrity: after fine-tuning, verify that tokenizer.json is intact, and restore it from the official source if necessary.
Output naming: name the output file after the actual model to avoid confusion (the log above shows exactly this kind of mix-up: a Qwen model written to a Llama-named file).
GGUF models run readily on llama.cpp, Ollama, LM Studio, and other inference backends, which greatly simplifies local deployment (see the Ollama sketch below).
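As one example of that portability, here is a minimal sketch of loading the GGUF file into Ollama (assumes Ollama is installed; the model name qwen2.5-ft is arbitrary):

```bash
# Register the GGUF file as a local Ollama model and chat with it
cat > Modelfile <<'EOF'
FROM /mnt/workspace/qwen2.5-7B-instruct.gguf
EOF
ollama create qwen2.5-ft -f Modelfile
ollama run qwen2.5-ft "你好,请介绍一下你自己。"
```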
