Training Your Own Robot to Walk from Scratch with Isaac Lab

An introductory Isaac Lab tutorial

Author: Chen Weiyao

1. Environment Setup

1.1 Recommended Configuration

  • Operating system: Ubuntu 22.04 LTS
  • GPU: NVIDIA RTX 4080 or above

1.2 Installing Ubuntu 22.04 LTS

Follow ZEEKLOG's Ubuntu 16.04 LTS installation tutorial, substituting the Ubuntu 22.04 image for the 16.04 one; all other steps are identical. Allow at least 200 GB of disk space for each of /home and /usr.

1.3 Installing the NVIDIA Driver

Choose the driver matching your GPU model and operating system; a 550.xxx.xxx driver release is recommended. Install it following the tutorial, then run nvidia-smi in a terminal. Output like the following indicates a successful installation:

Thu Jun  5 15:49:51 2025
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.54.14              Driver Version: 550.54.14      CUDA Version: 12.4     |
|-----------------------------------------+------------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id          Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |           Memory-Usage | GPU-Util  Compute M. |
|                                         |                        |               MIG M. |
|=========================================+========================+======================|
|   0  NVIDIA GeForce RTX 4070 ...    Off |   00000000:01:00.0  On |                  N/A |
| 30%   58C    P2            130W /  245W |    6085MiB /  12282MiB |     80%      Default |
|                                         |                        |                  N/A |
+-----------------------------------------+------------------------+----------------------+

1.4 Installing CUDA and cuDNN

1.4.1 Installing CUDA

Based on the driver information above, choose a compatible CUDA version: since nvidia-smi reports CUDA Version: 12.4, CUDA 12.4 or any lower version can be installed. In NVIDIA's CUDA Toolkit Archive, pick the installer matching your OS, e.g.:

linux -> x86_64 -> ubuntu -> 22.04 -> deb (local)

Installation steps:

wget https://developer.download.nvidia.com/compute/cuda/repos/ubuntu2204/x86_64/cuda-ubuntu2204.pin
sudo mv cuda-ubuntu2204.pin /etc/apt/preferences.d/cuda-repository-pin-600
wget https://developer.download.nvidia.com/compute/cuda/12.4.0/local_installers/cuda-repo-ubuntu2204-12-4-local_12.4.0-550.54.14-1_amd64.deb
sudo dpkg -i cuda-repo-ubuntu2204-12-4-local_12.4.0-550.54.14-1_amd64.deb
sudo cp /var/cuda-repo-ubuntu2204-12-4-local/cuda-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cuda-toolkit-12-4

After installation, set the environment variables:

echo 'export PATH=/usr/local/cuda-12.4/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.4/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
echo 'export CUDA_HOME=/usr/local/cuda-12.4' >> ~/.bashrc
source ~/.bashrc

To verify that CUDA installed correctly, run the following in a terminal:

nvcc --version

If the installation succeeded, the CUDA version information is printed:

nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Mar_28_02:18:24_PDT_2024
Cuda compilation tools, release 12.4, V12.4.131
Build cuda_12.4.r12.4/compiler.34097967_0
1.4.2 Installing cuDNN

Download the matching cuDNN build from NVIDIA's cuDNN Archive, choosing a release that supports your CUDA version (the commands below use a cuDNN 9.x local repository built for CUDA 12).
Change into the directory containing the downloaded cuDNN .deb file, open a terminal, and run:

sudo dpkg -i cudnn-local-repo-ubuntu2204-xxxxxxxxxxxxxx.deb  # replace with the actual file name
sudo cp /var/cudnn-local-repo-ubuntu2204-9.10.1/cudnn-*-keyring.gpg /usr/share/keyrings/
sudo apt-get update
sudo apt-get -y install cudnn-cuda-12

1.5 Installing PyTorch

Based on the CUDA version installed above, find the matching install command on the PyTorch website and run it in your Python environment (you can defer this until the Python environment is created in the next steps). The following command installs PyTorch for CUDA 12.4:

conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia
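Once PyTorch is installed in your conda environment, a quick sanity check confirms that it sees the GPU, CUDA, and cuDNN. A minimal sketch, with the expected values assuming the 2.4.0 / CUDA 12.4 install above:

import torch

print(torch.__version__)               # expect 2.4.0
print(torch.cuda.is_available())       # True if the driver and the CUDA build match
print(torch.version.cuda)              # expect "12.4"
print(torch.backends.cudnn.version())  # cuDNN version as an integer
print(torch.cuda.get_device_name(0))   # your GPU model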

1.6 Installing Anaconda

See the Anaconda installation tutorial.

1.7 Installing Isaac Sim

You can follow the official Isaac Sim installation guide; in brief:

conda create -n env_isaaclab python=3.10
conda activate env_isaaclab
pip install --upgrade pip
conda install pytorch==2.4.0 torchvision==0.19.0 torchaudio==2.4.0 pytorch-cuda=12.4 -c pytorch -c nvidia  # the PyTorch version chosen earlier
pip install 'isaacsim[all,extscache]==4.5.0' --extra-index-url https://pypi.nvidia.com

# verify the installation; the first launch of Isaac Sim takes quite a while
isaacsim

On first launch you will be prompted to accept the NVIDIA software license agreement. To accept the EULA, reply Yes when the following message appears:

By installing or using Isaac Sim, I agree to the terms of NVIDIA SOFTWARE LICENSE AGREEMENT (EULA)
in https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-software-license-agreement

Do you accept the EULA? (Yes/No): Yes

1.8 Asset Configuration

Download Isaac Sim 4.5 and its asset packs from the official Isaac Sim website; the asset pack is about 92 GB.
Configure the assets following the official tutorial.
In addition, open the Isaac Sim library directory in VSCode and globally search for the default asset URL

https://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/4.5 

and replace every occurrence with your own asset directory:

https://your-asset-directory/Isaac/4.5 
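If you would rather script this than use VSCode's global search, the sketch below walks the installed isaacsim package and rewrites the URL. The package path and the set of file extensions are assumptions — adjust them to your installation, and back the directory up first:

import pathlib

OLD = "https://omniverse-content-production.s3-us-west-2.amazonaws.com/Assets/Isaac/4.5"
NEW = "https://your-asset-directory/Isaac/4.5"  # your own asset directory

# assumed conda install location of the isaacsim package
root = pathlib.Path("~/anaconda3/envs/env_isaaclab/lib/python3.10/site-packages/isaacsim").expanduser()

for path in root.rglob("*"):
    if path.suffix not in {".json", ".toml", ".kit", ".py"}:  # file types that may embed the URL
        continue
    try:
        text = path.read_text(encoding="utf-8")
    except (UnicodeDecodeError, OSError):
        continue
    if OLD in text:
        path.write_text(text.replace(OLD, NEW), encoding="utf-8")
        print("patched", path)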

1.9 Installing Isaac Lab

Clone the Isaac Lab repository with git, or download the zip directly from the official GitHub page. To clone with git:

git clone [email protected]:isaac-sim/IsaacLab.git 

After cloning, enter the isaaclab directory and install the dependencies:

./isaaclab.sh --install  # or "./isaaclab.sh -i"

To verify the installation, run the following from the directory containing isaaclab.sh:

./isaaclab.sh -p scripts/tutorials/00_sim/create_empty.py 

2. Robot Registration

2.1 Obtaining the Robot USD File

Export the robot's URDF from SolidWorks, then open Isaac Sim, click File->Import in the top-left corner, and select the exported URDF. The import generates a USD file; save it to a directory of your choice. For a mobile robot, remember to select a movable (non-fixed) base. If the import hangs or errors out, check that the URDF is complete and that the STL mesh paths resolve; if the package://... path format does not work, replace it with relative or absolute paths.
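If the URDF uses package:// paths, a small script can rewrite them to absolute paths before importing. A minimal sketch — the file name and target directory here are hypothetical:

import re
import pathlib

urdf_path = pathlib.Path("op3.urdf")                 # your exported URDF
package_root = "/home/user/robots/op3_description"   # directory the package:// prefix should resolve to

text = urdf_path.read_text(encoding="utf-8")
# turn e.g. "package://op3_description/meshes/foo.stl" into "/home/user/robots/op3_description/meshes/foo.stl"
text = re.sub(r"package://[^/]+", package_root, text)

out_path = urdf_path.with_name("op3_abs.urdf")
out_path.write_text(text, encoding="utf-8")
print("wrote", out_path)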

2.2 Registering the Robot

# enter the config directory
cd ~/isaaclab/source/isaaclab_tasks/isaaclab_tasks/manager_based/locomotion/velocity/config

Copy the existing h1 folder and rename it after your robot (op3 is used here), then create an op3.py as the robot's parameter file. The directory structure should now look like the tree below; add any missing scripts as needed.

isaaclab_tasks
└── manager_based
    └── locomotion
        └── velocity
            ├── config
            │   └── op3
            │       ├── agents
            │       │   ├── __init__.py
            │       │   ├── rsl_rl_ppo_cfg.py
            │       │   ├── skrl_flat_ppo_cfg.yaml
            │       │   └── skrl_rough_ppo_cfg.yaml
            │       ├── __init__.py
            │       ├── op3.py                  # new
            │       ├── flat_env_cfg.py
            │       ├── rough_env_cfg.py
            │       └── check_op3.py
            ├── velocity_env_cfg.py
            └── op3_velocity_env_cfg.py         # new
2.2.1 Writing the Robot Parameter File

The robot parameter file:

isaaclab_tasks
└── manager_based
    └── locomotion
        └── velocity
            └── config
                └── op3
                    ├── agents
                    │   └── ...
                    ├── op3.py
                    └── ...

The contents of the robot parameter file op3.py:

import isaaclab.sim as sim_utils
from isaaclab.actuators import ImplicitActuatorCfg
from isaaclab.assets.articulation import ArticulationCfg
from isaaclab.utils.assets import ISAAC_NUCLEUS_DIR

OP3_CFG = ArticulationCfg(
    spawn=sim_utils.UsdFileCfg(
        usd_path=f"xxx/op3.usd",  # replace with the path of the robot USD you exported earlier
        activate_contact_sensors=True,
        rigid_props=sim_utils.RigidBodyPropertiesCfg(
            disable_gravity=False,
            retain_accelerations=False,
            linear_damping=0.0,
            angular_damping=0.0,
            max_linear_velocity=20.0,
            max_angular_velocity=20.0,
            max_depenetration_velocity=10.0,
        ),
        articulation_props=sim_utils.ArticulationRootPropertiesCfg(
            enabled_self_collisions=True,
            solver_position_iteration_count=4,
            solver_velocity_iteration_count=4,
        ),
    ),
    init_state=ArticulationCfg.InitialStateCfg(
        pos=(0.0, 0.0, 0.28),  # initial base position; the exact height can be read off with check_op3.py
        # initial angles of the robot's joints, matching the joints in the URDF;
        # every movable joint must be listed here
        joint_pos={
            "head_pan": 0.0,
            "head_tilt": 0.0,
            ".*_hip_yaw": 0.0,
            ".*_hip_roll": 0.0,
            ".*_hip_pitch": 0.0,
            ".*_knee": 0.0,
            ".*_ank_pitch": 0.0,
            ".*_ank_roll": 0.0,
            ".*_sho_pitch": 0.0,
            ".*_sho_roll": 0.0,
            ".*_el": 0.0,
        },
        joint_vel={".*": 0.0},
    ),
    soft_joint_pos_limit_factor=0.7,
    # likewise, replace the joint names below with your robot's joint names;
    # e.g. if your robot has no arms, drop the "arms" group
    actuators={
        "legs": ImplicitActuatorCfg(
            joint_names_expr=[".*_hip_yaw", ".*_hip_roll", ".*_hip_pitch", ".*_knee", ".*_ank_pitch", ".*_ank_roll"],
            effort_limit_sim=20,
            velocity_limit_sim=20.0,
            stiffness={
                ".*_hip_yaw": 100.0,
                ".*_hip_roll": 100.0,
                ".*_hip_pitch": 100.0,
                ".*_knee": 100.0,
                ".*_ank_pitch": 100.0,
                ".*_ank_roll": 100.0,
            },
            damping={
                ".*_hip_yaw": 20.0,
                ".*_hip_roll": 20.0,
                ".*_hip_pitch": 20.0,
                ".*_knee": 20.0,
                ".*_ank_pitch": 20.0,
                ".*_ank_roll": 20.0,
            },
        ),
        "arms": ImplicitActuatorCfg(
            joint_names_expr=[".*_sho_pitch", ".*_sho_roll", ".*_el"],
            effort_limit_sim=20,
            velocity_limit_sim=20.0,
            stiffness={".*_sho_pitch": 100.0, ".*_sho_roll": 100.0, ".*_el": 100.0},
            damping={".*_sho_pitch": 20.0, ".*_sho_roll": 20.0, ".*_el": 20.0},
        ),
        "head": ImplicitActuatorCfg(
            joint_names_expr=["head_pan", "head_tilt"],
            effort_limit_sim=20,
            velocity_limit_sim=20.0,
            stiffness={"head_pan": 100.0, "head_tilt": 100.0},
            damping={"head_pan": 20.0, "head_tilt": 20.0},
        ),
    },
)
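Note that keys like ".*_hip_yaw" are full-string regular expressions that Isaac Lab matches against the joint names parsed from the USD. A quick offline check of your patterns (the joint names here are hypothetical):

import re

# joint names as they might appear in the imported USD (hypothetical)
joint_names = ["l_hip_yaw", "r_hip_yaw", "l_knee", "r_knee", "l_sho_pitch", "head_pan"]
patterns = [".*_hip_yaw", ".*_knee", ".*_sho_pitch", "head_pan"]

for pattern in patterns:
    matched = [name for name in joint_names if re.fullmatch(pattern, name)]
    print(f"{pattern!r} -> {matched}")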
The base-height check script:

isaaclab_tasks
└── manager_based
    └── locomotion
        └── velocity
            └── config
                └── op3
                    ├── agents
                    │   └── ...
                    ├── check_op3.py
                    └── ...

The contents of check_op3.py are below. You can give the robot a deliberately high initial base height in op3.py, read the base height at which it actually settles on the ground from this script's output, and use that value to tune the initial height:

import argparse

import torch

from isaaclab.app import AppLauncher

parser = argparse.ArgumentParser(description="This script demonstrates how to simulate bipedal robots.")
AppLauncher.add_app_launcher_args(parser)
args_cli = parser.parse_args()
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

import isaaclab.sim as sim_utils
from isaaclab.assets import Articulation
from isaaclab.sim import SimulationContext

# change this to your robot's parameter config script:
from op3 import OP3_CFG


def design_scene(sim: sim_utils.SimulationContext) -> tuple[list, torch.Tensor]:
    """Designs the scene."""
    cfg = sim_utils.GroundPlaneCfg()
    cfg.func("/World/defaultGroundPlane", cfg)
    cfg = sim_utils.DomeLightCfg(intensity=2000.0, color=(0.75, 0.75, 0.75))
    cfg.func("/World/Light", cfg)
    origins = torch.tensor([
        [0.0, -1.0, 0.0],
        [0.0, 0.0, 0.0],
        [0.0, 1.0, 0.0],
    ]).to(device=sim.device)
    hr2 = Articulation(OP3_CFG.replace(prim_path="/World/G1"))
    robots = [hr2]
    return robots, origins


def run_simulator(sim: sim_utils.SimulationContext, robots: list[Articulation], origins: torch.Tensor):
    """Runs the simulation loop."""
    sim_dt = sim.get_physics_dt()
    sim_time = 0.0
    count = 0
    while simulation_app.is_running():
        if count % 1000 == 0:
            sim_time = 0.0
            count = 0
            for index, robot in enumerate(robots):
                joint_pos, joint_vel = robot.data.default_joint_pos, robot.data.default_joint_vel
                robot.write_joint_state_to_sim(joint_pos, joint_vel)
                root_state = robot.data.default_root_state.clone()
                root_state[:, :3] += origins[index]
                robot.write_root_pose_to_sim(root_state[:, :7])
                robot.write_root_velocity_to_sim(root_state[:, 7:])
                robot.reset()
            print(">>>>>>>> Reset!")
        for robot in robots:
            robot.set_joint_position_target(robot.data.default_joint_pos.clone())
            robot.write_data_to_sim()
        sim.step()
        sim_time += sim_dt
        count += 1
        for robot in robots:
            robot.update(sim_dt)
            root_pos = robot.data.root_pos_w  # robot root position
            print(f"Robot height (z): {root_pos[:, 2].item():.3f}")  # print the z coordinate (height)


def main():
    sim_cfg = sim_utils.SimulationCfg(dt=0.005, device=args_cli.device, gravity=[0.0, 0.0, -9.81])
    sim = SimulationContext(sim_cfg)
    sim.set_camera_view(eye=[3.0, 0.0, 2.25], target=[0.0, 0.0, 1.0])
    robots, origins = design_scene(sim)
    sim.reset()
    print("[INFO]: Setup complete...")
    run_simulator(sim, robots, origins)


if __name__ == "__main__":
    main()
    simulation_app.close()
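To fill in joint_pos and the actuator groups with the right names, it helps to print what Isaac Lab actually parses from the USD. A small addition you could drop into run_simulator — this assumes the joint_names and body_names properties of Articulation exposed by the Isaac Lab API:

# print once, e.g. right after the ">>>>>>>> Reset!" line:
for robot in robots:
    print("joints:", robot.joint_names)  # names to match in joint_pos / joint_names_expr
    print("bodies:", robot.body_names)   # link names for the base and feet in the env configs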
2.2.2 Writing the Velocity Environment Configuration File

isaaclab_tasks
└── manager_based
    └── locomotion
        └── velocity
            ├── config
            │   └── op3
            │       └── ...
            └── op3_velocity_env_cfg.py

The contents of the velocity environment configuration file op3_velocity_env_cfg.py are below. Start by copying the existing velocity_env_cfg.py and edit it against this version; every place that needs changing is flagged with a comment.

# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

import math
from dataclasses import MISSING

import isaaclab.sim as sim_utils
from isaaclab.assets import ArticulationCfg, AssetBaseCfg
from isaaclab.envs import ManagerBasedRLEnvCfg
from isaaclab.managers import CurriculumTermCfg as CurrTerm
from isaaclab.managers import EventTermCfg as EventTerm
from isaaclab.managers import ObservationGroupCfg as ObsGroup
from isaaclab.managers import ObservationTermCfg as ObsTerm
from isaaclab.managers import RewardTermCfg as RewTerm
from isaaclab.managers import SceneEntityCfg
from isaaclab.managers import TerminationTermCfg as DoneTerm
from isaaclab.scene import InteractiveSceneCfg
from isaaclab.sensors import ContactSensorCfg, RayCasterCfg, patterns
from isaaclab.terrains import TerrainImporterCfg
from isaaclab.utils import configclass
from isaaclab.utils.assets import ISAAC_NUCLEUS_DIR, ISAACLAB_NUCLEUS_DIR
from isaaclab.utils.noise import AdditiveUniformNoiseCfg as Unoise

import isaaclab_tasks.manager_based.locomotion.velocity.mdp as mdp

from isaaclab.terrains.config.rough import ROUGH_TERRAINS_CFG  # isort: skip
from isaaclab.managers import ActionTermCfg


@configclass
class MySceneCfg(InteractiveSceneCfg):
    """Configuration for the terrain scene with a legged robot."""

    # ground terrain
    terrain = TerrainImporterCfg(
        prim_path="/World/ground",
        terrain_type="generator",
        terrain_generator=ROUGH_TERRAINS_CFG,
        max_init_terrain_level=5,
        collision_group=-1,
        physics_material=sim_utils.RigidBodyMaterialCfg(
            friction_combine_mode="multiply",
            restitution_combine_mode="multiply",
            static_friction=1.0,
            dynamic_friction=1.0,
        ),
        visual_material=sim_utils.MdlFileCfg(
            mdl_path=f"{ISAACLAB_NUCLEUS_DIR}/Materials/TilesMarbleSpiderWhiteBrickBondHoned/TilesMarbleSpiderWhiteBrickBondHoned.mdl",
            project_uvw=True,
            texture_scale=(0.25, 0.25),
        ),
        debug_vis=False,
    )
    robot: ArticulationCfg = MISSING
    # open the robot USD in Isaac Sim and look up its base link name;
    # replace robotis_op3/base with <your robot name>/<your base link name>
    height_scanner = RayCasterCfg(
        prim_path="{ENV_REGEX_NS}/Robot/robotis_op3/base",
        offset=RayCasterCfg.OffsetCfg(pos=(0.0, 0.0, 20.0)),
        attach_yaw_only=True,
        pattern_cfg=patterns.GridPatternCfg(resolution=0.1, size=[1.6, 1.0]),
        debug_vis=False,
        mesh_prim_paths=["/World/ground"],
    )
    # replace robotis_op3 with your robot's name
    contact_forces = ContactSensorCfg(
        prim_path="{ENV_REGEX_NS}/Robot/robotis_op3/.*", history_length=3, track_air_time=True
    )
    sky_light = AssetBaseCfg(
        prim_path="/World/skyLight",
        spawn=sim_utils.DomeLightCfg(
            intensity=750.0,
            texture_file=f"{ISAAC_NUCLEUS_DIR}/Materials/Textures/Skies/PolyHaven/kloofendal_43d_clear_puresky_4k.hdr",
        ),
    )


@configclass
class CommandsCfg:
    """Command specifications for the MDP."""

    # make the robot track desired linear and angular velocities in the world frame
    base_velocity = mdp.UniformVelocityCommandCfg(
        asset_name="robot",                  # target asset (the robot)
        resampling_time_range=(10.0, 10.0),  # how often commands are resampled (fixed 10 s here)
        rel_standing_envs=0.02,              # fraction of envs commanded to stand still
        rel_heading_envs=1.0,                # fraction of envs given a heading command (1.0 = all)
        heading_command=True,                # enable heading (target orientation) commands
        heading_control_stiffness=0.5,       # stiffness of heading control (how aggressively heading is tracked)
        debug_vis=True,                      # visualize the commands
        ranges=mdp.UniformVelocityCommandCfg.Ranges(
            lin_vel_x=(-1.0, 1.0),           # sampled x (forward/backward) linear velocity range
            lin_vel_y=(-1.0, 1.0),           # sampled y (left/right) linear velocity range
            ang_vel_z=(-1.0, 1.0),           # sampled yaw rate range
            heading=(-math.pi, math.pi),     # sampled heading range in radians (-pi to pi)
        ),
    )


@configclass
class ActionsCfg:
    """Action specifications for the MDP."""

    joint_pos = mdp.JointPositionActionCfg(
        asset_name="robot", joint_names=[".*"], scale=0.5, use_default_offset=True
    )


@configclass
class ObservationsCfg:
    """Observation specifications for the MDP."""

    @configclass
    class PolicyCfg(ObsGroup):
        """Observations for policy group."""

        base_lin_vel = ObsTerm(func=mdp.base_lin_vel, noise=Unoise(n_min=-0.1, n_max=0.1))
        base_ang_vel = ObsTerm(func=mdp.base_ang_vel, noise=Unoise(n_min=-0.2, n_max=0.2))
        projected_gravity = ObsTerm(func=mdp.projected_gravity, noise=Unoise(n_min=-0.05, n_max=0.05))
        velocity_commands = ObsTerm(func=mdp.generated_commands, params={"command_name": "base_velocity"})
        joint_pos = ObsTerm(func=mdp.joint_pos_rel, noise=Unoise(n_min=-0.01, n_max=0.01))
        joint_vel = ObsTerm(func=mdp.joint_vel_rel, noise=Unoise(n_min=-1.5, n_max=1.5))
        actions = ObsTerm(func=mdp.last_action)
        height_scan = ObsTerm(
            func=mdp.height_scan,
            params={"sensor_cfg": SceneEntityCfg("height_scanner")},
            noise=Unoise(n_min=-0.1, n_max=0.1),
            clip=(-1.0, 1.0),
        )

        def __post_init__(self):
            self.enable_corruption = True
            self.concatenate_terms = True

    policy: PolicyCfg = PolicyCfg()


@configclass
class EventCfg:
    """Configuration for events."""

    physics_material = EventTerm(
        func=mdp.randomize_rigid_body_material,
        mode="startup",
        params={
            "asset_cfg": SceneEntityCfg("robot", body_names=".*"),
            "static_friction_range": (0.8, 0.8),
            "dynamic_friction_range": (0.6, 0.6),
            "restitution_range": (0.0, 0.0),
            "num_buckets": 64,
        },
    )
    add_base_mass = EventTerm(
        func=mdp.randomize_rigid_body_mass,
        mode="startup",
        params={
            "asset_cfg": SceneEntityCfg("robot", body_names="base"),  # replace base with your base link name
            "mass_distribution_params": (-5.0, 5.0),
            "operation": "add",
        },
    )
    base_external_force_torque = EventTerm(
        func=mdp.apply_external_force_torque,
        mode="reset",
        params={
            "asset_cfg": SceneEntityCfg("robot", body_names="base"),  # replace base with your base link name
            "force_range": (0.0, 0.0),
            "torque_range": (-0.0, 0.0),
        },
    )
    reset_base = EventTerm(
        func=mdp.reset_root_state_uniform,
        mode="reset",
        params={
            "pose_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5), "yaw": (-3.14, 3.14)},
            "velocity_range": {
                "x": (-0.5, 0.5),
                "y": (-0.5, 0.5),
                "z": (-0.5, 0.5),
                "roll": (-0.5, 0.5),
                "pitch": (-0.5, 0.5),
                "yaw": (-0.5, 0.5),
            },
        },
    )
    reset_robot_joints = EventTerm(
        func=mdp.reset_joints_by_scale,
        mode="reset",
        params={
            "position_range": (0.5, 1.5),
            "velocity_range": (0.0, 0.0),
        },
    )
    push_robot = EventTerm(
        func=mdp.push_by_setting_velocity,
        mode="interval",
        interval_range_s=(10.0, 15.0),
        params={"velocity_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5)}},
    )


@configclass
class RewardsCfg:
    """Reward terms for the MDP."""

    track_lin_vel_xy_exp = RewTerm(
        func=mdp.track_lin_vel_xy_exp, weight=1.0, params={"command_name": "base_velocity", "std": math.sqrt(0.25)}
    )
    track_ang_vel_z_exp = RewTerm(
        func=mdp.track_ang_vel_z_exp, weight=0.5, params={"command_name": "base_velocity", "std": math.sqrt(0.25)}
    )
    # -- penalties
    lin_vel_z_l2 = RewTerm(func=mdp.lin_vel_z_l2, weight=-2.0)
    ang_vel_xy_l2 = RewTerm(func=mdp.ang_vel_xy_l2, weight=-0.05)
    dof_torques_l2 = RewTerm(func=mdp.joint_torques_l2, weight=-1.0e-5)
    dof_acc_l2 = RewTerm(func=mdp.joint_acc_l2, weight=-2.5e-7)
    action_rate_l2 = RewTerm(func=mdp.action_rate_l2, weight=-0.01)
    # replace .*ank_roll_link with your robot's foot link name
    feet_air_time = RewTerm(
        func=mdp.feet_air_time,
        weight=0.125,
        params={
            "sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*ank_roll_link"),
            "command_name": "base_velocity",
            "threshold": 0.5,
        },
    )
    # undesired_contacts = RewTerm(
    #     func=mdp.undesired_contacts,
    #     weight=-1.0,
    #     params={"sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*THIGH"), "threshold": 1.0},
    # )  # not used for now
    flat_orientation_l2 = RewTerm(func=mdp.flat_orientation_l2, weight=0.0)
    dof_pos_limits = RewTerm(func=mdp.joint_pos_limits, weight=0.0)


@configclass
class TerminationsCfg:
    """Termination terms for the MDP."""

    time_out = DoneTerm(func=mdp.time_out, time_out=True)
    # replace base with your base link name
    base_contact = DoneTerm(
        func=mdp.illegal_contact,
        params={"sensor_cfg": SceneEntityCfg("contact_forces", body_names="base"), "threshold": 1.0},
    )


@configclass
class CurriculumCfg:
    """Curriculum terms for the MDP."""

    terrain_levels = CurrTerm(func=mdp.terrain_levels_vel)


@configclass
class LocomotionVelocityRoughEnvCfg(ManagerBasedRLEnvCfg):
    """Configuration for the locomotion velocity-tracking environment."""

    scene: MySceneCfg = MySceneCfg(num_envs=4096, env_spacing=2.5)
    observations: ObservationsCfg = ObservationsCfg()
    actions: ActionsCfg = ActionsCfg()
    commands: CommandsCfg = CommandsCfg()
    rewards: RewardsCfg = RewardsCfg()
    terminations: TerminationsCfg = TerminationsCfg()
    events: EventCfg = EventCfg()
    curriculum: CurriculumCfg = CurriculumCfg()

    def __post_init__(self):
        """Post initialization."""
        self.decimation = 4
        self.episode_length_s = 20.0
        # simulation settings
        self.sim.dt = 0.005
        self.sim.render_interval = self.decimation
        self.sim.physics_material = self.scene.terrain.physics_material
        self.sim.physx.gpu_max_rigid_patch_count = 10 * 2**15
        if self.scene.height_scanner is not None:
            self.scene.height_scanner.update_period = self.decimation * self.sim.dt
        if self.scene.contact_forces is not None:
            self.scene.contact_forces.update_period = self.sim.dt
        if getattr(self.curriculum, "terrain_levels", None) is not None:
            if self.scene.terrain.terrain_generator is not None:
                self.scene.terrain.terrain_generator.curriculum = True
        else:
            if self.scene.terrain.terrain_generator is not None:
                self.scene.terrain.terrain_generator.curriculum = False
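A quick way to read the timing settings in __post_init__: physics runs at sim.dt, while the policy acts once every decimation physics steps. The implied rates, with the values copied from the config above:

sim_dt = 0.005                    # physics step, seconds
decimation = 4                    # physics steps per policy step
control_dt = sim_dt * decimation  # 0.02 s -> the policy runs at 50 Hz

episode_length_s = 20.0
steps_per_episode = episode_length_s / control_dt  # 1000 policy steps per episode
print(control_dt, steps_per_episode)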
2.2.3 Writing the Environment Configuration Files

##### Rough-terrain environment configuration script

isaaclab_tasks
└── manager_based
    └── locomotion
        └── velocity
            └── config
                └── op3
                    ├── agents
                    │   └── ...
                    ├── rough_env_cfg.py
                    └── ...

The contents of the rough-terrain configuration file rough_env_cfg.py:

# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

from isaaclab.managers import RewardTermCfg as RewTerm
from isaaclab.managers import SceneEntityCfg
from isaaclab.utils import configclass

import isaaclab_tasks.manager_based.locomotion.velocity.mdp as mdp
from isaaclab_tasks.manager_based.locomotion.velocity.op3_velocity_env_cfg import LocomotionVelocityRoughEnvCfg, RewardsCfg

from .op3 import OP3_CFG  # import the robot parameter config script written earlier
import random


# you can replace every OP3 in this file with your own robot's name; this won't be mentioned again
@configclass
class OP3Rewards(RewardsCfg):
    """Reward terms for the MDP."""

    # penalty when the robot "dies" (episode terminates)
    termination_penalty = RewTerm(func=mdp.is_terminated, weight=-200.0)
    lin_vel_z_l2 = None
    # reward tracking the commanded xy linear velocity in the robot's own heading frame
    track_lin_vel_xy_exp = RewTerm(
        func=mdp.track_lin_vel_xy_yaw_frame_exp,
        weight=1.0,
        params={"command_name": "base_velocity", "std": 0.5},
    )
    # reward tracking the commanded z-axis angular velocity in the world frame
    track_ang_vel_z_exp = RewTerm(
        func=mdp.track_ang_vel_z_world_exp, weight=1.0, params={"command_name": "base_velocity", "std": 0.5}
    )
    # reward alternating swing phases of the two feet (encourages a proper gait);
    # set weight=0 for flight tests
    # replace .*ank_roll_link with your robot's foot link name
    feet_air_time = RewTerm(
        func=mdp.feet_air_time_positive_biped,
        weight=0.25,
        params={
            "command_name": "base_velocity",
            "sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*ank_roll_link"),
            "threshold": 0.4,
        },
    )
    # penalize feet sliding on the ground (undesirable gait)
    # replace .*ank_roll_link with your robot's foot link name
    feet_slide = RewTerm(
        func=mdp.feet_slide,
        weight=-0.25,
        params={
            "sensor_cfg": SceneEntityCfg("contact_forces", body_names=".*ank_roll_link"),
            "asset_cfg": SceneEntityCfg("robot", body_names=".*ank_roll_link"),
        },
    )
    # penalize ankle joints exceeding their limits
    # replace .*_ank_roll with your robot's ankle joint name
    dof_pos_limits = RewTerm(
        func=mdp.joint_pos_limits, weight=-1.0, params={"asset_cfg": SceneEntityCfg("robot", joint_names=".*_ank_roll")}
    )
    # penalize hip joints (hip_yaw, hip_roll, hip_pitch) deviating from their defaults
    # replace the joint names with your robot's hip joint names
    joint_deviation_hip = RewTerm(
        func=mdp.joint_deviation_l1,
        weight=-0.2,
        params={"asset_cfg": SceneEntityCfg("robot", joint_names=[".*_hip_roll", ".*_hip_pitch", ".*_hip_yaw"])},
    )
    # a knee deviation penalty could be added here in the same way


# corresponds to: Rough-OP3-train
@configclass
class OP3RoughEnvCfg(LocomotionVelocityRoughEnvCfg):
    rewards: OP3Rewards = OP3Rewards()

    def __post_init__(self):
        super().__post_init__()
        # Scene
        self.scene.robot = OP3_CFG.replace(prim_path="{ENV_REGEX_NS}/Robot")
        if self.scene.height_scanner:
            # replace robotis_op3/base with <your robot name>/<your base link name>
            self.scene.height_scanner.prim_path = "{ENV_REGEX_NS}/Robot/robotis_op3/base"
        self.events.push_robot = None
        self.events.add_base_mass = None
        self.events.reset_robot_joints.params["position_range"] = (1.0, 1.0)
        # replace base with your base link name
        self.events.base_external_force_torque.params["asset_cfg"].body_names = [".*base"]
        self.events.reset_base.params = {
            "pose_range": {"x": (-0.5, 0.5), "y": (-0.5, 0.5), "yaw": (-3.14, 3.14)},
            "velocity_range": {
                "x": (0.0, 0.0),
                "y": (0.0, 0.0),
                "z": (0.0, 0.0),
                "roll": (0.0, 0.0),
                "pitch": (0.0, 0.0),
                "yaw": (0.0, 0.0),
            },
        }
        self.terminations.base_contact.params["sensor_cfg"].body_names = [".*base"]
        self.rewards.undesired_contacts = None
        self.rewards.flat_orientation_l2.weight = -1.0
        self.rewards.dof_torques_l2.weight = 0.0
        self.rewards.action_rate_l2.weight = -0.005
        self.rewards.dof_acc_l2.weight = -1.25e-7
        self.commands.base_velocity.ranges.lin_vel_x = (0.5, 1.0)   # forward/backward velocity range
        self.commands.base_velocity.ranges.lin_vel_y = (0.0, 0.0)   # lateral velocity range
        self.commands.base_velocity.ranges.ang_vel_z = (-1.0, 1.0)  # yaw rate range


# corresponds to: Rough-OP3-Play
@configclass
class OP3RoughEnvCfg_PLAY(OP3RoughEnvCfg):
    def __post_init__(self):
        super().__post_init__()
        self.scene.num_envs = 1
        self.scene.env_spacing = 2.5
        self.episode_length_s = 40.0
        self.scene.terrain.max_init_terrain_level = None
        if self.scene.terrain.terrain_generator is not None:
            self.scene.terrain.terrain_generator.num_rows = 5
            self.scene.terrain.terrain_generator.num_cols = 5
            self.scene.terrain.terrain_generator.curriculum = False
        # matching the commands of OP3RoughEnvCfg:
        self.commands.base_velocity.ranges.lin_vel_x = (1.0, 1.0)
        self.commands.base_velocity.ranges.lin_vel_y = (0.0, 0.0)
        self.commands.base_velocity.ranges.ang_vel_z = (-1.0, 1.0)
        self.commands.base_velocity.ranges.heading = (0.0, 0.0)
        self.observations.policy.enable_corruption = False
        self.events.base_external_force_torque = None
        self.events.push_robot = None

##### Flat-terrain environment configuration script

isaaclab_tasks
└── manager_based
    └── locomotion
        └── velocity
            └── config
                └── op3
                    ├── agents
                    │   └── ...
                    ├── flat_env_cfg.py
                    └── ...

The contents of the flat-terrain configuration file flat_env_cfg.py:

# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

from isaaclab.utils import configclass

from .rough_env_cfg import OP3RoughEnvCfg  # as before, replace 'OP3' with your robot's name; nothing else changes


@configclass
class OP3FlatEnvCfg(OP3RoughEnvCfg):
    def __post_init__(self):
        super().__post_init__()
        self.scene.terrain.terrain_type = "plane"
        self.scene.terrain.terrain_generator = None
        self.scene.height_scanner = None
        self.observations.policy.height_scan = None
        self.curriculum.terrain_levels = None
        self.rewards.feet_air_time.weight = 1.0
        self.rewards.feet_air_time.params["threshold"] = 0.6


@configclass
class OP3FlatEnvCfg_PLAY(OP3FlatEnvCfg):
    def __post_init__(self) -> None:
        super().__post_init__()
        self.scene.num_envs = 1
        self.scene.env_spacing = 2.5
        self.observations.policy.enable_corruption = False
        self.events.base_external_force_torque = None
        self.events.push_robot = None
2.2.4 Registering the Training Environments

isaaclab_tasks
└── manager_based
    └── locomotion
        └── velocity
            └── config
                └── op3
                    ├── agents
                    │   └── ...
                    ├── __init__.py
                    └── ...

Register the robot's training environments in __init__.py as below; you only need to replace OP3 with your own robot's name:

# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

import gymnasium as gym

from . import agents

##
# Register Gym environments.
##

# Terrain: rough terrain, with a terrain generator and a difficulty curriculum.
# Purpose: the standard training environment, suited to learning to walk on complex terrain.
# Rewards, terminations, observations, etc. are complete -- suitable for full training runs.
gym.register(
    id="Rough-OP3-train",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.rough_env_cfg:OP3RoughEnvCfg",
        "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:OP3RoughPPORunnerCfg",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_rough_ppo_cfg.yaml",
    },
)

# Terrain: rough terrain, but in "Play" mode.
# Differences: a single environment, better for testing and visualization; the terrain
# curriculum is disabled and fewer terrains are generated, so memory use is low; random
# pushes and other disturbance events are off, and observation noise is disabled so the
# true behavior is easy to inspect.
# Purpose: demos, visualization, debugging, and policy evaluation.
gym.register(
    id="Rough-OP3-Play",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.rough_env_cfg:OP3RoughEnvCfg_PLAY",
        "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:OP3RoughPPORunnerCfg",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_rough_ppo_cfg.yaml",
    },
)

# Terrain: flat ground, no terrain generator.
# Differences: no terrain difficulty curriculum (the ground is always a plane); no
# height-scan observation, so the observation vector is smaller; reward parameters
# adapted to flat ground.
# Purpose: training on flat ground, for baselines and basic walking skills.
gym.register(
    id="Flat-OP3-train",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.flat_env_cfg:OP3FlatEnvCfg",
        "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:OP3FlatPPORunnerCfg",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_flat_ppo_cfg.yaml",
    },
)

# Terrain: flat ground, Play mode.
# Differences: few environments, no disturbances, no observation noise -- easy to
# visualize and debug.
# Purpose: demos, visualization, debugging, and policy evaluation on flat ground.
gym.register(
    id="Flat-OP3-Play",
    entry_point="isaaclab.envs:ManagerBasedRLEnv",
    disable_env_checker=True,
    kwargs={
        "env_cfg_entry_point": f"{__name__}.flat_env_cfg:OP3FlatEnvCfg_PLAY",
        "rsl_rl_cfg_entry_point": f"{agents.__name__}.rsl_rl_ppo_cfg:OP3FlatPPORunnerCfg",
        "skrl_cfg_entry_point": f"{agents.__name__}:skrl_flat_ppo_cfg.yaml",
    },
)
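To confirm that the four tasks registered, you can list them from the gymnasium registry. The sketch below mirrors what Isaac Lab's bundled task-listing script (under scripts/environments/) does: the Kit app must be launched before isaaclab_tasks is imported, so run it with ./isaaclab.sh -p:

import argparse

from isaaclab.app import AppLauncher

# the simulation app must exist before isaaclab_tasks can be imported
parser = argparse.ArgumentParser()
AppLauncher.add_app_launcher_args(parser)
args_cli = parser.parse_args(["--headless"])
app_launcher = AppLauncher(args_cli)
simulation_app = app_launcher.app

import gymnasium as gym
import isaaclab_tasks  # noqa: F401  (importing the package runs the gym.register calls)

print([task_id for task_id in gym.registry if "OP3" in task_id])
simulation_app.close()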
2.2.5 Adjusting the RL Training Parameter Scripts

isaaclab_tasks
└── manager_based
    └── locomotion
        └── velocity
            └── config
                └── op3
                    ├── agents
                    │   ├── __init__.py
                    │   ├── rsl_rl_ppo_cfg.py
                    │   ├── skrl_flat_ppo_cfg.yaml
                    │   └── skrl_rough_ppo_cfg.yaml
                    └── ...

rsl_rl_ppo_cfg.py:
# Copyright (c) 2022-2025, The Isaac Lab Project Developers.
# All rights reserved.
#
# SPDX-License-Identifier: BSD-3-Clause

from isaaclab.utils import configclass

from isaaclab_rl.rsl_rl import RslRlOnPolicyRunnerCfg, RslRlPpoActorCriticCfg, RslRlPpoAlgorithmCfg


# replace 'OP3' with your robot's name; the other parameters can stay as they are
@configclass
class OP3RoughPPORunnerCfg(RslRlOnPolicyRunnerCfg):
    num_steps_per_env = 24
    max_iterations = 3000
    save_interval = 50
    experiment_name = "OP3_rough"
    empirical_normalization = False
    policy = RslRlPpoActorCriticCfg(
        init_noise_std=1.0,
        actor_hidden_dims=[512, 256, 128],
        critic_hidden_dims=[512, 256, 128],
        activation="elu",
    )
    algorithm = RslRlPpoAlgorithmCfg(
        value_loss_coef=1.0,
        use_clipped_value_loss=True,
        clip_param=0.2,
        entropy_coef=0.01,
        num_learning_epochs=5,
        num_mini_batches=4,
        learning_rate=1.0e-3,
        schedule="adaptive",
        gamma=0.99,
        lam=0.95,
        desired_kl=0.01,
        max_grad_norm=1.0,
    )


@configclass
class OP3FlatPPORunnerCfg(OP3RoughPPORunnerCfg):
    def __post_init__(self):
        super().__post_init__()
        self.max_iterations = 1000
        self.experiment_name = "OP3_flat"
        self.policy.actor_hidden_dims = [128, 128, 128]
        self.policy.critic_hidden_dims = [128, 128, 128]
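For a sense of the training volume these settings imply (num_envs comes from MySceneCfg in the velocity env config):

num_envs = 4096          # from MySceneCfg
num_steps_per_env = 24   # rollout length per environment per iteration
max_iterations = 3000    # rough config (1000 for the flat config)

samples_per_iteration = num_envs * num_steps_per_env    # 98,304 transitions per iteration
total_samples = samples_per_iteration * max_iterations  # ~294.9 million transitions
print(f"{samples_per_iteration:,} samples/iteration, {total_samples:,} in total")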
skrl_flat_ppo_cfg.yaml and skrl_rough_ppo_cfg.yaml:

In skrl_flat_ppo_cfg.yaml and skrl_rough_ppo_cfg.yaml you again only need to replace OP3 with your own robot's name; the other parameters can stay unchanged. Note that trainer.timesteps corresponds to rollouts × training iterations: 24 × 1000 = 24000 for the flat config and 24 × 3000 = 72000 for the rough config.

The contents of skrl_flat_ppo_cfg.yaml:

seed: 42

# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html
models:
  separate: False
  policy:  # see gaussian_model parameters
    class: GaussianMixin
    clip_actions: False
    clip_log_std: True
    min_log_std: -20.0
    max_log_std: 2.0
    initial_log_std: 0.0
    network:
      - name: net
        input: STATES
        layers: [128, 128, 128]
        activations: elu
    output: ACTIONS
  value:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [128, 128, 128]
        activations: elu
    output: ONE

# Rollout memory
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
memory:
  class: RandomMemory
  memory_size: -1  # automatically determined (same as agent:rollouts)

# PPO agent configuration (field names are from PPO_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html
agent:
  class: PPO
  rollouts: 24
  learning_epochs: 5
  mini_batches: 4
  discount_factor: 0.99
  lambda: 0.95
  learning_rate: 1.0e-03
  learning_rate_scheduler: KLAdaptiveLR
  learning_rate_scheduler_kwargs:
    kl_threshold: 0.01
  state_preprocessor: null
  state_preprocessor_kwargs: null
  value_preprocessor: null
  value_preprocessor_kwargs: null
  random_timesteps: 0
  learning_starts: 0
  grad_norm_clip: 1.0
  ratio_clip: 0.2
  value_clip: 0.2
  clip_predicted_values: True
  entropy_loss_scale: 0.01
  value_loss_scale: 1.0
  kl_threshold: 0.0
  rewards_shaper_scale: 1.0
  time_limit_bootstrap: False
  # logging and checkpoint
  experiment:
    directory: "OP3_flat"
    experiment_name: ""
    write_interval: auto
    checkpoint_interval: auto

# Sequential trainer
# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html
trainer:
  class: SequentialTrainer
  timesteps: 24000
  environment_info: log

The contents of skrl_rough_ppo_cfg.yaml:

seed: 42

# Models are instantiated using skrl's model instantiator utility
# https://skrl.readthedocs.io/en/latest/api/utils/model_instantiators.html
models:
  separate: False
  policy:  # see gaussian_model parameters
    class: GaussianMixin
    clip_actions: False
    clip_log_std: True
    min_log_std: -20.0
    max_log_std: 2.0
    initial_log_std: 0.0
    network:
      - name: net
        input: STATES
        layers: [512, 256, 128]
        activations: elu
    output: ACTIONS
  value:  # see deterministic_model parameters
    class: DeterministicMixin
    clip_actions: False
    network:
      - name: net
        input: STATES
        layers: [512, 256, 128]
        activations: elu
    output: ONE

# Rollout memory
# https://skrl.readthedocs.io/en/latest/api/memories/random.html
memory:
  class: RandomMemory
  memory_size: -1  # automatically determined (same as agent:rollouts)

# PPO agent configuration (field names are from PPO_DEFAULT_CONFIG)
# https://skrl.readthedocs.io/en/latest/api/agents/ppo.html
agent:
  class: PPO
  rollouts: 24
  learning_epochs: 5
  mini_batches: 4
  discount_factor: 0.995
  lambda: 0.95
  learning_rate: 1.0e-03
  learning_rate_scheduler: KLAdaptiveLR
  learning_rate_scheduler_kwargs:
    kl_threshold: 0.01
  state_preprocessor: null
  state_preprocessor_kwargs: null
  value_preprocessor: null
  value_preprocessor_kwargs: null
  random_timesteps: 0
  learning_starts: 0
  grad_norm_clip: 1.0
  ratio_clip: 0.2
  value_clip: 0.2
  clip_predicted_values: True
  entropy_loss_scale: 0.01
  value_loss_scale: 1.0
  kl_threshold: 0.0
  rewards_shaper_scale: 1.0
  time_limit_bootstrap: False
  # logging and checkpoint
  experiment:
    directory: "OP3_rough"
    experiment_name: ""
    write_interval: auto
    checkpoint_interval: auto

# Sequential trainer
# https://skrl.readthedocs.io/en/latest/api/trainers/sequential.html
trainer:
  class: SequentialTrainer
  timesteps: 72000
  environment_info: log

3. Robot Training and Testing

3.1 Training the Robot

After completing all the environment-registration steps above, enter the isaaclab directory and run the training command:

./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Flat-OP3-train --max_iterations 3000 --seed 42 --headless

The training framework here is rsl-rl; the other frameworks under isaaclab/scripts/reinforcement_learning/... can be used the same way. The task is the Flat-OP3-train environment registered earlier, and the rough-terrain environment Rough-OP3-train works equally well. Logs and checkpoints are written under logs/rsl_rl/<experiment_name>, and training can be monitored by pointing TensorBoard at that directory.
The available training arguments can be listed with:

 ./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/train.py --task Flat-OP3-train -h
usage: train.py [-h] [--video] [--video_length VIDEO_LENGTH] [--video_interval VIDEO_INTERVAL]
                [--num_envs NUM_ENVS] [--task TASK] [--seed SEED] [--max_iterations MAX_ITERATIONS]
                [--distributed] [--experiment_name EXPERIMENT_NAME] [--run_name RUN_NAME]
                [--resume RESUME] [--load_run LOAD_RUN] [--checkpoint CHECKPOINT]
                [--logger {neptune,tensorboard,wandb}] [--log_project_name LOG_PROJECT_NAME]
                [--headless] [--livestream {0,1,2}] [--enable_cameras] [--device DEVICE]
                [--verbose] [--info] [--experience EXPERIENCE] [--kit_args KIT_ARGS]

Train an RL agent with RSL-RL.

options:
  -h, --help            show this help message and exit
  --video               Record videos during training.
  --video_length VIDEO_LENGTH
                        Length of the recorded video (in steps).
  --video_interval VIDEO_INTERVAL
                        Interval between video recordings (in steps).
  --num_envs NUM_ENVS   Number of environments to simulate.
  --task TASK           Name of the task.
  --seed SEED           Seed used for the environment
  --max_iterations MAX_ITERATIONS
                        RL Policy training iterations.
  --distributed         Run training with multiple GPUs or nodes.

rsl_rl:
  Arguments for RSL-RL agent.

  --experiment_name EXPERIMENT_NAME
                        Name of the experiment folder where logs will be stored.
  --run_name RUN_NAME   Run name suffix to the log directory.
  --resume RESUME       Whether to resume from a checkpoint.
  --load_run LOAD_RUN   Name of the run folder to resume from.
  --checkpoint CHECKPOINT
                        Checkpoint file to resume from.
  --logger {neptune,tensorboard,wandb}
                        Logger module to use.
  --log_project_name LOG_PROJECT_NAME
                        Name of the logging project when using wandb or neptune.

app_launcher arguments:
  Arguments for the AppLauncher. For more details, please check the documentation.

  --headless            Force display off at all times.
  --livestream {0,1,2}  Force enable livestreaming. Mapping corresponds to that for the
                        `LIVESTREAM` environment variable.
  --enable_cameras      Enable camera sensors and relevant extension dependencies.
  --device DEVICE       The device to run the simulation on. Can be "cpu", "cuda", "cuda:N",
                        where N is the device ID
  --verbose             Enable verbose-level log output from the SimulationApp.
  --info                Enable info-level log output from the SimulationApp.
  --experience EXPERIENCE
                        The experience file to load when launching the SimulationApp. If an
                        empty string is provided, the experience file is determined based on
                        the headless flag. If a relative path is provided, it is resolved
                        relative to the `apps` folder in Isaac Sim and Isaac Lab (in that
                        order).
  --kit_args KIT_ARGS   Command line arguments for Omniverse Kit as a string separated by a
                        space delimiter. Example usage: --kit_args "--ext-folder=/path/to/ext1
                        --ext-folder=/path/to/ext2"

3.2 Testing the Robot

After training completes, you will find a number of .pt files under ~/isaaclab/logs/rsl_rl/OP3_flat. These are the network checkpoints saved during training, and you can use them to test the robot's walking.

For example, having trained for 3000 iterations as above, select the last saved checkpoint model_2999.pt and run the play script:

./isaaclab.sh -p scripts/reinforcement_learning/rsl_rl/play.py --task Flat-OP3-Play --checkpoint /home/suiyi/isaacsim/isaaclab/logs/rsl_rl/OP3_flat/2025-06-05_19-31-40/model_2999.pt 
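Beyond replaying in the simulator, the rsl-rl play script also exports an inference-ready copy of the policy (a JIT policy.pt and a policy.onnx under an exported/ folder inside the run directory). A minimal sketch for querying the ONNX export outside the simulator — the path below is an assumption, and onnxruntime must be installed:

import numpy as np
import onnxruntime as ort

# adjust to the exported/ folder of your own run
session = ort.InferenceSession("logs/rsl_rl/OP3_flat/2025-06-05_19-31-40/exported/policy.onnx")
obs_name = session.get_inputs()[0].name
obs_dim = session.get_inputs()[0].shape[-1]

obs = np.zeros((1, obs_dim), dtype=np.float32)  # replace with a real observation vector
actions = session.run(None, {obs_name: obs})[0]
print(actions.shape)  # one target position per actuated joint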

AI 代码助手:CodeGeex、RooCode 和 GitHub Copilot 对比 你想了解CodeGeex、RooCode(袋鼠代码)和GitHub Copilot三款主流AI代码助手的优劣势对比,核心是想明确它们在不同使用场景下的适配性,方便选择或组合使用。下面我会从核心定位、核心优势、主要劣势、适用场景四个维度,清晰对比三者的差异: 一、核心定位先明确 * GitHub Copilot:由微软+OpenAI联合开发,基于GPT系列大模型,深度集成GitHub生态,主打“通用型代码生成+全语言覆盖”,是目前市场渗透率最高的AI代码助手。 * CodeGeex:由智谱AI开发,国产开源代码大模型,主打“多语言支持+本地化部署+开源可控”,侧重中文场景和代码安全。 * RooCode(袋鼠代码):字节跳动推出的AI代码助手,主打“轻量高效+字节系生态适配+中文交互友好”,侧重中小开发者和快速开发场景。 二、优劣势详细对比