Contents
JIT deployment test command:
python -c "import torch;import torchvision;torch.jit.load('/shared_disk/models/others/nlf/models/nlf_l/nlf_l_multi_0.3.2.torchscript').cuda().eval()"
nlf-pipeline
https://github.com/isarandi/nlf-pipeline
pip install stcnbuf
cl: command line error D8021: invalid numeric argument '/Wno-cpp' (the MSVC compiler rejects /Wno-cpp, which is a GCC/Clang-only warning flag)
pip install git+https://github.com/isarandi/nlf-pipeline.git
nlf-pipeline dependencies:
cameravision-0.3.0.tar.gz
bodycompress-0.2.3.dev0.tar.gz
kornia
stcnbuf: human segmentation, not as good as SAM2
blendipose-0.1.2  bodycompress-0.2.3.dev0  boxlib-0.2.1  bpy-3.6.0
cameravision-0.3.0  ffmpeg-python-0.2.0  framepump-0.1.3  future-1.0.0
nlf-pipeline-0.1.0  pyransac3d-0.6.0  pytorch-minimize-0.0.2  shapely-2.1.1
smplfitter-0.2.10  stcnbuf-0.2.1  yt-dlp-2025.5.22  zstandard-0.23.0
framepump library import error:
python -c "from framepump.videolib import (VideoFrames,get_duration,get_fps,get_reader,get_writer,num_frames,trim_video,video_audio_mux)"
Fix 1: import from these paths instead:
from framepump.framepump import (
VideoFrames,
get_duration,
get_fps,
get_reader,
get_writer,
num_frames,
trim_video,
video_audio_mux,
)
from framepump.video_writing import VideoWriter
Fix 2: drop the framepump library dependency altogether.
Segmentation model: stcn.pth
Camera pose estimation:
Now we can estimate the camera motion:
python -m nlf_pipeline.run_slam \
    --droid-model-path="$DATA_ROOT/models/droid.pth" \
    --video-path="$INFERENCE_ROOT/videos_in/${vid}.mp4" \
    --mask-path="$INFERENCE_ROOT/masks_semseg/${vid}_masks.pkl" \
    --output-path="$INFERENCE_ROOT/cameras/${vid}.pkl" \
    --smooth
This saves the camera motion as a pickled list of cameravision Camera
objects.
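The saved trajectory can also be inspected programmatically by unpickling it. A minimal sketch of that pattern follows; the real output file holds cameravision Camera objects, but plain dicts stand in here so the example runs without that library installed, and the dummy trajectory data is made up:

```python
import os
import pickle
import tempfile

# Stand-in for the run_slam output: the real pipeline pickles a list of
# cameravision.Camera objects; plain dicts substitute for them here.
cameras_out = [{'frame': i, 't': [0.0, 0.0, float(i)]} for i in range(3)]

path = os.path.join(tempfile.mkdtemp(), 'cam.pkl')
with open(path, 'wb') as f:
    pickle.dump(cameras_out, f)

# Loading the trajectory back works the same way with the real file
# (e.g. $INFERENCE_ROOT/cameras/${vid}.pkl).
with open(path, 'rb') as f:
    cameras = pickle.load(f)
print(len(cameras))  # one entry per processed frame
```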
To verify that this step gave reasonable results, you can visualize the camera trajectory:
python -m nlf_pipeline.viz_camtraj --video-id=$vid --camera-view
vertex_subset
import numpy as np

# DATA_ROOT is assumed to point at the same data root used by the commands above.
body_models = ['smpl', 'smplx']
# Per-model index arrays selecting a 1024-vertex subset of the full mesh.
vertex_subset = {
    k: np.load(f'{DATA_ROOT}/body_models/{k}/vertex_subset_1024.npz')['i_verts']
    for k in body_models}
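The dict comprehension loads one index array per body model and uses it to pick 1024 vertices out of the full mesh. A self-contained sketch of that pattern, with synthetic .npz files standing in for the real ones (the file layout and the 'i_verts' key come from the snippet above; the dummy index data is made up):

```python
import os
import tempfile
import numpy as np

DATA_ROOT = tempfile.mkdtemp()  # stand-in for the real data root
body_models = ['smpl', 'smplx']

# Create dummy vertex_subset files mimicking the expected directory layout.
for k in body_models:
    d = os.path.join(DATA_ROOT, 'body_models', k)
    os.makedirs(d, exist_ok=True)
    np.savez(os.path.join(d, 'vertex_subset_1024.npz'),
             i_verts=np.arange(1024))

# Same loading pattern as in the snippet above.
vertex_subset = {
    k: np.load(f'{DATA_ROOT}/body_models/{k}/vertex_subset_1024.npz')['i_verts']
    for k in body_models}

# The indices select a 1024-vertex subset from a full per-vertex array.
full_verts = np.zeros((6890, 3))  # SMPL meshes have 6890 vertices
sub = full_verts[vertex_subset['smpl']]
print(sub.shape)  # (1024, 3)
```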