<<<<< Recommended environment: a native Linux install or a dual-boot Linux system; be cautious about running Linux inside a virtual machine >>>>>
Project overview
You can go straight to the official "full tutorial for SO-100" in the lerobot repository and follow its workflow.
I. Environment Setup
1. Create the conda environment
conda create -y -n lerobot python=3.10
conda activate lerobot
2. Clone the project and install the required packages
git clone https://github.com/huggingface/lerobot.git ~/lerobot
cd lerobot && pip install -e ".[feetech]"
conda install -y -c conda-forge ffmpeg
pip uninstall -y opencv-python
conda install -y -c conda-forge "opencv>=4.10.0"
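Optional sanity check (a minimal sketch; it only assumes the packages above installed without errors): confirm that OpenCV imports and that ffmpeg provides a usable video encoder before moving on.
python -c "import cv2; print(cv2.__version__)"
ffmpeg -version | head -n 1
ffmpeg -hide_banner -encoders | grep -E "libsvtav1|libopenh264"
If neither encoder is listed, see the note further below about the "Unknown encoder 'libsvtav1'" error.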
Note: packages that may need to be installed if errors occur
cd lerobot && pip install -e .
sudo apt-get install git-lfs
git lfs install
conda install -c conda-forge jpeg libtiff
pip install pynput==1.7.7
Note: fixing the "Unknown encoder 'libsvtav1'" error
Around line 134 of ~/lerobot/lerobot/common/datasets/video_utils.py, change the default encoder from vcodec: str = "libsvtav1" to vcodec: str = "libopenh264".
II. Leader/Follower Arm Hardware Setup
1. Servo configuration
Reference: servo configuration notes for the lerobot open-source project (CSDN blog): https://blog.csdn.net/ou1531037815/article/details/144098305
(1) Find the serial port of each arm's controller board
python lerobot/scripts/find_motors_bus_port.py
(2) Configure the servos of the leader and follower arms
sudo chmod 777 /dev/ttyACM0
sudo chmod 777 /dev/ttyACM1
python lerobot/scripts/configure_motor.py \
--port /dev/ttyACM0 \
--brand feetech \
--model sts3215 \
--baudrate 1000000 \
--ID 1  # <-- UPDATE HERE
Run this command once per servo, setting --ID to 1 through 6 in turn, with only that servo connected to the bus; see the sketch below.
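A minimal sketch for setting all six IDs in one sitting (a hypothetical helper loop, not part of the lerobot scripts; it assumes this arm's board is on /dev/ttyACM0 and that exactly one servo is plugged into the bus before each iteration):
# IDs follow the joint order used later in configs.py: 1 = shoulder_pan ... 6 = gripper
for id in 1 2 3 4 5 6; do
  read -p "Connect only servo ${id} to the bus, then press Enter..."
  python lerobot/scripts/configure_motor.py \
    --port /dev/ttyACM0 \
    --brand feetech \
    --model sts3215 \
    --baudrate 1000000 \
    --ID ${id}
done
Repeat the same procedure for the other arm's board (e.g. /dev/ttyACM1).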
2. Assemble the leader and follower arms
3. Check the arm serial ports and the camera indices
python lerobot/scripts/find_motors_bus_port.py
sudo chmod 777 /dev/ttyACM0
sudo chmod 777 /dev/ttyACM1
python lerobot/common/robot_devices/cameras/opencv.py \
--images-dir outputs/images_from_opencv_cameras
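If it is unclear which camera_index maps to which physical camera, a quick check with v4l2-ctl (from the optional v4l-utils package; not required by lerobot) can help, in addition to the images the command above saves under outputs/images_from_opencv_cameras:
sudo apt-get install -y v4l-utils
v4l2-ctl --list-devices   # lists each camera name with its /dev/video* node(s)
ls /dev/video*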
4. Edit the robot config file
In ~/lerobot/lerobot/common/robot_devices/robots/configs.py, update the four places marked # <-- UPDATE HERE below:
@RobotConfig.register_subclass("so100")
@dataclass
class So100RobotConfig(ManipulatorRobotConfig):
    calibration_dir: str = ".cache/calibration/so100"
    # `max_relative_target` limits the magnitude of the relative positional target vector for safety purposes.
    # Set this to a positive scalar to have the same value for all motors, or a list that is the same length as
    # the number of motors in your follower arms.
    max_relative_target: int | None = None

    leader_arms: dict[str, MotorsBusConfig] = field(
        default_factory=lambda: {
            "main": FeetechMotorsBusConfig(
                port="/dev/ttyACM0",  # <-- UPDATE HERE
                motors={
                    # name: (index, model)
                    "shoulder_pan": [1, "sts3215"],
                    "shoulder_lift": [2, "sts3215"],
                    "elbow_flex": [3, "sts3215"],
                    "wrist_flex": [4, "sts3215"],
                    "wrist_roll": [5, "sts3215"],
                    "gripper": [6, "sts3215"],
                },
            ),
        }
    )

    follower_arms: dict[str, MotorsBusConfig] = field(
        default_factory=lambda: {
            "main": FeetechMotorsBusConfig(
                port="/dev/ttyACM1",  # <-- UPDATE HERE
                motors={
                    # name: (index, model)
                    "shoulder_pan": [1, "sts3215"],
                    "shoulder_lift": [2, "sts3215"],
                    "elbow_flex": [3, "sts3215"],
                    "wrist_flex": [4, "sts3215"],
                    "wrist_roll": [5, "sts3215"],
                    "gripper": [6, "sts3215"],
                },
            ),
        }
    )

    cameras: dict[str, CameraConfig] = field(
        default_factory=lambda: {
            "laptop": OpenCVCameraConfig(
                camera_index=2,  # <-- UPDATE HERE
                fps=30,
                width=640,
                height=480,
            ),
            "phone": OpenCVCameraConfig(
                camera_index=4,  # <-- UPDATE HERE
                fps=30,
                width=640,
                height=480,
            ),
        }
    )

    mock: bool = False
5. Calibrate the leader and follower arms
(1) Calibrate the follower arm
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--robot.cameras='{}' \
--control.type=calibrate \
--control.arms='["main_follower"]'
(2) Calibrate the leader arm
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--robot.cameras='{}' \
--control.type=calibrate \
--control.arms='["main_leader"]'
(3) Verify with teleoperation
Teleoperate without cameras:
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--robot.cameras='{}' \
--control.type=teleoperate
Then teleoperate again with the cameras enabled (as configured in configs.py):
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=teleoperate
III. Model Training (Online Data Collection)
1. Prerequisites
(1) Activate the environment
conda activate lerobot
cd lerobot
(2) Log in to Hugging Face
Start Clash, then set the command-line proxy on Linux.
Reference on accessing Hugging Face from Linux (CSDN blog): https://blog.csdn.net/ou1531037815/article/details/144570497
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
Replace ${HUGGINGFACE_TOKEN} with your own access token:
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
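A quick sanity check that the proxy and the token both work (assuming Clash is listening on 127.0.0.1:7890 as set above):
curl -I https://huggingface.co   # should return an HTTP status line through the proxy
huggingface-cli whoami           # should print your Hugging Face username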
2. Set hardware permissions and test the connection
sudo chmod 777 /dev/ttyACM0
sudo chmod 777 /dev/ttyACM1
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=teleoperate
Once the hardware connection test passes, you can stop control_robot.py.
3. Collect data
First, create a new local dataset named "so100_test_v2" and push it to the Hugging Face Hub (only needed for the first recording session):
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=record \
--control.fps=30 \
--control.single_task="Pick up the white toy tiger and place it into the plate." \
--control.repo_id=${HF_USER}/so100_test_v2 \
--control.tags='["so100","tutorial"]' \
--control.warmup_time_s=5 \
--control.episode_time_s=15 \
--control.reset_time_s=5 \
--control.num_episodes=1 \
--control.push_to_hub=true
Then continue recording into the existing local dataset and keep it synced to the Hub (for all subsequent sessions):
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=record \
--control.fps=30 \
--control.single_task="Pick up the white toy tiger and place it into the plate." \
--control.repo_id=${HF_USER}/so100_test_v2 \
--control.tags='["so100","tutorial"]' \
--control.warmup_time_s=5 \
--control.episode_time_s=15 \
--control.reset_time_s=5 \
--control.num_episodes=4 \
--control.push_to_hub=true \
--control.resume=true
4. Inspect the data & replay actions
Visualize the dataset in the browser:
python lerobot/scripts/visualize_dataset_html.py \
--repo-id ${HF_USER}/so100_test_v2
Replay episode 0 on the follower arm:
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=replay \
--control.fps=30 \
--control.repo_id=${HF_USER}/so100_test_v2 \
--control.episode=0
5. Train a model on the collected Hugging Face dataset
python lerobot/scripts/train.py \
--dataset.repo_id=${HF_USER}/so100_test_v2 \
--policy.type=act \
--output_dir=outputs/train/act_so100_test_v2_act \
--job_name=act_so100_test_v2_act \
--policy.device=cuda \
--wandb.enable=false
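If training is interrupted, it can usually be resumed from the last checkpoint. A minimal sketch, assuming the checkpoint layout used in the evaluation step below (checkpoints/last/pretrained_model under the output directory); if these flags are not available in your lerobot version, check python lerobot/scripts/train.py --help:
python lerobot/scripts/train.py \
  --config_path=outputs/train/act_so100_test_v2_act/checkpoints/last/pretrained_model/train_config.json \
  --resume=true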
IV. Model Evaluation
1. Prerequisites
(1) Activate the environment
conda activate lerobot
cd lerobot
(2) Log in to Hugging Face
Start Clash, then set the command-line proxy on Linux.
Reference on accessing Hugging Face from Linux (CSDN blog): https://blog.csdn.net/ou1531037815/article/details/144570497
export http_proxy=http://127.0.0.1:7890
export https_proxy=http://127.0.0.1:7890
huggingface-cli login --token ${HUGGINGFACE_TOKEN} --add-to-git-credential
HF_USER=$(huggingface-cli whoami | head -n 1)
echo $HF_USER
2. Set hardware permissions and test the connection
sudo chmod 777 /dev/ttyACM0
sudo chmod 777 /dev/ttyACM1
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=teleoperate
Once the hardware connection test passes, you can stop control_robot.py.
3. Load the trained weights and evaluate the policy
The evaluation rollouts are recorded into a separate dataset whose name starts with eval_, so they stay apart from the training data:
python lerobot/scripts/control_robot.py \
--robot.type=so100 \
--control.type=record \
--control.fps=30 \
--control.single_task="Pick up the white toy tiger and place it into the plate." \
--control.repo_id=${HF_USER}/eval_act_so100_test_v2_act \
--control.tags='["tutorial"]' \
--control.warmup_time_s=5 \
--control.episode_time_s=30 \
--control.reset_time_s=5 \
--control.num_episodes=4 \
--control.push_to_hub=false \
--control.policy.path=outputs/train/act_so100_test_v2_act/checkpoints/last/pretrained_model
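The evaluation rollouts are a dataset of their own, so they can be reviewed with the same visualization script used earlier (assuming the episodes are in the local lerobot cache, since push_to_hub=false here):
python lerobot/scripts/visualize_dataset_html.py \
  --repo-id ${HF_USER}/eval_act_so100_test_v2_act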