OpenPose Demo

Overview

First of all, forget about the OpenPose library source code; here, you only need to compile and use the demo: ./build/examples/openpose/openpose.bin

To learn how to use it, run ./build/examples/openpose/openpose.bin --help in a command-line interface and read all the available flags (check only the flags of examples/openpose/openpose.cpp itself, i.e., the section starting with Flags from examples/openpose/openpose.cpp:). We detail them in the sections below.

On Ubuntu, Mac, and other Unix systems, you can use any command-line interface, such as Terminal or Terminator. On Windows, open PowerShell (recommended) or the Windows Command Prompt (CMD); PowerShell can be opened with the Windows+X shortcut followed by the A key. If you are not familiar with these non-GUI tools, watch a YouTube video tutorial. Also, make sure you are in the root directory of the project (i.e., in the OpenPose folder, not inside build/, windows/, or bin/).
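
To list every available flag (the demo-relevant ones appear under the Flags from examples/openpose/openpose.cpp: section), run the help command from the root folder; the second line is just an optional sketch, assuming a Unix shell, for quickly checking whether a particular flag exists:

# Print all available flags
./build/examples/openpose/openpose.bin --help
# Optional: quickly check for a particular flag (Unix shells only)
./build/examples/openpose/openpose.bin --help | grep write_json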

Running on Images, Video, or Webcam

See OpenPose: Quick Start.

Pose + Face + Hands

See OpenPose: Quick Start.

Maximum Accuracy Configuration

See OpenPose: Quick Start.

Reducing Latency/Lag

In general, there are three ways to reduce latency (each with its own drawbacks); a combined example is shown after this list:

  • Reducing --output_resolution: it slightly reduces latency and increases FPS, but the displayed image quality will be worse.
  • Reducing --net_resolution, --face_net_resolution, and --hand_net_resolution: this increases FPS and reduces latency, but accuracy drops, especially for small people in the image. Note: for maximum accuracy, follow the maximum accuracy configuration from the OpenPose: Quick Start documentation.
  • Enabling --disable_multi_thread: it should reduce latency, but the speed will drop to 1-GPU speed (since it will only use 1 GPU). Note that it is practically only effective for body detection; it is usually not worth it if face and hands are also extracted.
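
As a rough sketch combining the flags above (the resolution values here are only illustrative, not a recommendation), a low-latency webcam run could look like this:

# Illustrative low-latency configuration: smaller output and net resolutions, single-threaded pipeline
./build/examples/openpose/openpose.bin --output_resolution 640x360 --net_resolution -1x256 --disable_multi_thread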

Kinect 2.0 as Webcam on Windows 10

Since Windows 10, the Kinect 2.0 can be read as a normal webcam.
All you need to do is go to the Device Manager, expand the Kinect sensor devices tab, right-click, and update the driver of the WDF KinectSensor Interface. If you already have another webcam connected, either disconnect it or use --camera 2.
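
For example, assuming the Kinect is enumerated as camera index 2 on your machine:

# Read from camera index 2 (e.g., the Kinect) instead of the default camera
./build/examples/openpose/openpose.bin --camera 2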

JSON Output with No Visualization

The following examples run the demo video video.avi and save the output JSON files in the output/ folder.
Note: to understand the format of the JSON files, see doc/output.md.

# Only body
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_json output/ --display 0 --render_pose 0
# Body + face + hands
./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_json output/ --display 0 --render_pose 0 --face --hand

JSON Output + Image Saving

The following example runs the demo video video.avi, saves the rendered frames to output/result.avi, and saves the JSON output in the output/ folder. Note: see doc/output.md for the format of the JSON files.

./build/examples/openpose/openpose.bin --video examples/media/video.avi --write_video output/result.avi --write_json output/

# Fast method for speed
./build/examples/openpose/openpose.bin --hand
# Best results found with 6 scales
./build/examples/openpose/openpose.bin --hand --hand_scale_number 6 --hand_scale_range 0.4
# Adding tracking to Webcam (if FPS per GPU > 10 FPS) and Video
./build/examples/openpose/openpose.bin --video examples/media/video.avi --hand --hand_detector 3
# Multi-scale + tracking is also possible
./build/examples/openpose/openpose.bin --video examples/media/video.avi --hand --hand_scale_number 6 --hand_scale_range 0.4 --hand_detector 3

Rendering Face and Hands Without Pose

# CPU rendering (faster)
./build/examples/openpose/openpose.bin --render_pose 0 --face --face_render 1 --hand --hand_render 1
# GPU rendering
./build/examples/openpose/openpose.bin --render_pose 0 --face --face_render 2 --hand --hand_render 2

Debugging Information

# Basic information
./build/examples/openpose/openpose.bin --logging_level 3
# Showing all messages
./build/examples/openpose/openpose.bin --logging_level 0

Selecting GPUs

The following example runs the demo video video.avi, parallelizing over 2 GPUs: GPU 1 and GPU 2 (note that it skips GPU 0):

./build/examples/openpose/openpose.bin --video examples/media/video.avi --num_gpu 2 --num_gpu_start 1

Heatmap Storing
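
The following sketch saves the body part heatmaps, the background heatmap, and the PAFs as PNG files, using the heatmap flags described in the flag list below (the output folder name is only an example):

# Save body part heatmaps, background heatmap and Part Affinity Fields (PAFs)
./build/examples/openpose/openpose.bin --video examples/media/video.avi --heatmaps_add_parts --heatmaps_add_bkg --heatmaps_add_PAFs --write_heatmaps output_heatmaps_folder/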

Main Flags

We list some of the most important flags here. For a complete description of all of them, check the Flag Description section below or run ./build/examples/openpose/openpose.bin --help. A combined usage example follows this list.

  • --face: Enables face keypoint detection.
  • --hand: Enables hand keypoint detection.
  • --video input.mp4: Read video.
  • --camera 3: Read webcam number 3.
  • --image_dir path_to_images/: Run on a folder with images.
  • --ip_camera http://iris.not.iac.es/axis-cgi/mjpg/video.cgi?resolution=320x240?x.mjpeg: Run on a streamed IP camera. See examples of public IP cameras here.
  • --write_video path.avi: Save the processed images as a video.
  • --write_images folder_path: Save the processed images into a folder.
  • --write_keypoint path/: Output JSON, XML or YML files with the people pose data in a folder.
  • --process_real_time: For video, it might skip frames to display at real time.
  • --disable_blending: If enabled, it will render the results (keypoint skeletons or heatmaps) on a black background, not showing the original image. Related: part_to_show, alpha_pose, and alpha_heatmap.
  • --part_to_show: Prediction channel to visualize.
  • --display 0: Disable the visual display. Useful for servers and/or to slightly speed up OpenPose.
  • --num_gpu 2 --num_gpu_start 1: Parallelize over this number of GPUs starting by the desired device id. By default it uses all the available GPUs.
  • --model_pose MPI: Model to use; it affects the number of keypoints, speed, and accuracy.
  • --logging_level 3: Logging messages threshold, range [0,255]: 0 will output any message & 255 will output none. Current messages in the range [1-4], 1 for low priority messages and 4 for important ones.
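
As a combined sketch of the flags above (paths are only examples), the following command reads a video, enables face and hand detection, saves a rendered video and the JSON keypoints, and disables the display:

# Illustrative combination of the main flags
./build/examples/openpose/openpose.bin --video examples/media/video.avi --face --hand --write_video output/result.avi --write_json output/ --display 0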

Flag Description

Each flag is split into the flag name, its default value, and its description. A short combined usage example follows the full list.

  1. Debugging/Other
  • DEFINE_int32(logging_level, 3, “The logging level. Integer in the range [0, 255]. 0 will output any opLog() message, while 255 will not output any. Current OpenPose library messages are in the range 0-4: 1 for low priority messages and 4 for important ones.”);
  • DEFINE_bool(disable_multi_thread, false, “It would slightly reduce the frame rate in order to highly reduce the lag. Mainly useful for 1) Cases where it is needed a low latency (e.g., webcam in real-time scenarios with low-range GPU devices); and 2) Debugging OpenPose when it is crashing to locate the error.”);
  • DEFINE_int32(profile_speed, 1000, “If PROFILER_ENABLED was set in CMake or Makefile.config files, OpenPose will show some runtime statistics at this frame number.”);
  1. Producer
  • DEFINE_int32(camera, -1, “The camera index for cv::VideoCapture. Integer in the range [0, 9]. Select a negative number (by default), to auto-detect and open the first available camera.”);
  • DEFINE_string(camera_resolution, “-1x-1”, "Set the camera resolution (either --camera or --flir_camera). -1x-1 will use the default 1280x720 for --camera, or the maximum flir camera resolution available for --flir_camera");
  • DEFINE_string(video, “”, “Use a video file instead of the camera. Use examples/media/video.avi for our default example video.”);
  • DEFINE_string(image_dir, “”, “Process a directory of images. Use examples/media/ for our default example folder with 20 images. Read all standard formats (jpg, png, bmp, etc.).”);
  • DEFINE_bool(flir_camera, false, “Whether to use FLIR (Point-Grey) stereo camera.”);
  • DEFINE_int32(flir_camera_index, -1, “Select -1 (default) to run on all detected flir cameras at once. Otherwise, select the flir camera index to run, where 0 corresponds to the detected flir camera with the lowest serial number, and n to the n-th lowest serial number camera.”);
  • DEFINE_string(ip_camera, “”, “String with the IP camera URL. It supports protocols like RTSP and HTTP.”);
  • DEFINE_uint64(frame_first, 0, “Start on desired frame number. Indexes are 0-based, i.e., the first frame has index 0.”);
  • DEFINE_uint64(frame_step, 1, “Step or gap between processed frames. E.g., --frame_step 5 would read and process frames 0, 5, 10, etc…”);
  • DEFINE_uint64(frame_last, -1, “Finish on desired frame number. Select -1 to disable. Indexes are 0-based, e.g., if set to 10, it will process 11 frames (0-10).”);
  • DEFINE_bool(frame_flip, false, “Flip/mirror each frame (e.g., for real time webcam demonstrations).”);
  • DEFINE_int32(frame_rotate, 0, “Rotate each frame, 4 possible values: 0, 90, 180, 270.”);
  • DEFINE_bool(frames_repeat, false, “Repeat frames when finished.”);
  • DEFINE_bool(process_real_time, false, “Enable to keep the original source frame rate (e.g., for video). If the processing time is too long, it will skip frames. If it is too fast, it will slow it down.”);
  • DEFINE_string(camera_parameter_path, “models/cameraParameters/flir”, “String with the folder where the camera parameters are located. If there is only 1 XML file (for single video, webcam, or images from the same camera), you must specify the whole XML file path (ending in .xml).”);
  • DEFINE_bool(frame_undistort, false, "If false (default), it will not undistort the images; if true, it will undistort them based on the camera parameters found in camera_parameter_path");
  1. OpenPose
  • DEFINE_string(model_folder, “models/”, “Folder path (absolute or relative) where the models (pose, face, …) are located.”);
  • DEFINE_string(prototxt_path, “”, “The combination --model_folder + --prototxt_path represents the whole path to the prototxt file. If empty, it will use the default OpenPose ProtoTxt file.”);
  • DEFINE_string(caffemodel_path, “”, “The combination --model_folder + --caffemodel_path represents the whole path to the caffemodel file. If empty, it will use the default OpenPose CaffeModel file.”);
  • DEFINE_string(output_resolution, “-1x-1”, “The image resolution (display and output). Use “-1x-1” to force the program to use the input image resolution.”);
  • DEFINE_int32(num_gpu, -1, “The number of GPU devices to use. If negative, it will use all the available GPUs in your machine.”);
  • DEFINE_int32(num_gpu_start, 0, “GPU device start number.”);
  • DEFINE_int32(keypoint_scale, 0, “Scaling of the (x,y) coordinates of the final pose data array, i.e., the scale of the (x,y) coordinates that will be saved with the write_json & write_keypoint flags. Select 0 to scale it to the original source resolution; 1 to scale it to the net output size (set with net_resolution); 2 to scale it to the final output size (set with resolution); 3 to scale it in the range [0,1], where (0,0) would be the top-left corner of the image, and (1,1) the bottom-right one; and 4 for range [-1,1], where (-1,-1) would be the top-left corner of the image, and (1,1) the bottom-right one. Not related to scale_number and scale_gap.”);
  • DEFINE_int32(number_people_max, -1, “This parameter will limit the maximum number of people detected, by keeping the people with top scores. The score is based on the person area over the image, the body part score, as well as the joint score (between each pair of connected body parts). Useful if you know the exact number of people in the scene, so it can remove false positives (if all the people have been detected). However, it might also include false negatives by removing very small or highly occluded people. -1 will keep them all.”);
  • DEFINE_bool(maximize_positives, false, “It reduces the thresholds to accept a person candidate. It highly increases both false and true positives. I.e., it maximizes average recall but could harm average precision.”);
  • DEFINE_double(fps_max, -1., “Maximum processing frame rate. By default (-1), OpenPose will process frames as fast as possible. Example usage: If OpenPose is displaying images too quickly, this can reduce the speed so the user can analyze better each frame from the GUI.”);
  1. OpenPose Body Pose
  • DEFINE_int32(body, 1, “Select 0 to disable body keypoint detection (e.g., for faster but less accurate face keypoint detection, custom hand detector, etc.), 1 (default) for body keypoint estimation, and 2 to disable its internal body pose estimation network but still run the greedy association parsing algorithm”);
  • DEFINE_string(model_pose, “BODY_25”, “Model to be used. E.g., BODY_25 (fastest for CUDA version, most accurate, and includes foot keypoints), COCO (18 keypoints), MPI (15 keypoints, least accurate model but fastest on CPU), MPI_4_layers (15 keypoints, even faster but less accurate).”);
  • DEFINE_string(net_resolution, “-1x368”, “Multiples of 16. If it is increased, the accuracy potentially increases. If it is decreased, the speed increases. For maximum speed-accuracy balance, it should keep the closest aspect ratio possible to the images or videos to be processed. Using -1 in any of the dimensions, OP will choose the optimal aspect ratio depending on the user’s input value. E.g., the default -1x368 is equivalent to 656x368 in 16:9 resolutions, e.g., full HD (1920x1080) and HD (1280x720) resolutions.”);
  • DEFINE_int32(scale_number, 1, “Number of scales to average.”);
  • DEFINE_double(scale_gap, 0.25, “Scale gap between scales. No effect unless scale_number > 1. Initial scale is always 1. If you want to change the initial scale, you actually want to multiply the net_resolution by your desired initial scale.”);
  • DEFINE_double(upsampling_ratio, 0., “Upsampling ratio between the net_resolution and the output net results. A value less or equal than 0 (default) will use the network default value (recommended).”);
  1. OpenPose Body Pose Heatmaps and Part Candidates
  • DEFINE_bool(heatmaps_add_parts, false, “If true, it will fill op::Datum::poseHeatMaps array with the body part heatmaps, and analogously face & hand heatmaps to op::Datum::faceHeatMaps & op::Datum::handHeatMaps. If more than one add_heatmaps_X flag is enabled, it will place them in sequential memory order: body parts + bkg + PAFs. It will follow the order on POSE_BODY_PART_MAPPING in src/openpose/pose/poseParameters.cpp. Program speed will considerably decrease. Not required for OpenPose, enable it only if you intend to explicitly use this information later.”);
  • DEFINE_bool(heatmaps_add_bkg, false, “Same functionality as add_heatmaps_parts, but adding the heatmap corresponding to background.”);
  • DEFINE_bool(heatmaps_add_PAFs, false, “Same functionality as add_heatmaps_parts, but adding the PAFs.”);
  • DEFINE_int32(heatmaps_scale, 2, “Set 0 to scale op::Datum::poseHeatMaps in the range [-1,1], 1 for [0,1]; 2 for integer rounded [0,255]; and 3 for no scaling.”);
  • DEFINE_bool(part_candidates, false, “Also enable write_json in order to save this information. If true, it will fill the op::Datum::poseCandidates array with the body part candidates. Candidates refer to all the detected body parts, before being assembled into people. Note that the number of candidates is equal or higher than the number of final body parts (i.e., after being assembled into people). The empty body parts are filled with 0s. Program speed will slightly decrease. Not required for OpenPose, enable it only if you intend to explicitly use this information.”);
  1. OpenPose Face
  • DEFINE_bool(face, false, “Enables face keypoint detection. It will share some parameters from the body pose, e.g. model_folder. Note that this will considerably slow down the performance and increase the required GPU memory. In addition, the greater number of people on the image, the slower OpenPose will be.”);
  • DEFINE_int32(face_detector, 0, “Kind of face rectangle detector. Select 0 (default) to select OpenPose body detector (most accurate one and fastest one if body is enabled), 1 to select OpenCV face detector (not implemented for hands), 2 to indicate that it will be provided by the user, or 3 to also apply hand tracking (only for hand). Hand tracking might improve hand keypoint detection for webcam (if the frame rate is high enough, i.e., >7 FPS per GPU) and video. This is not person ID tracking, it simply looks for hands in positions at which hands were located in previous frames, but it does not guarantee the same person ID among frames.”);
  • DEFINE_string(face_net_resolution, “368x368”, “Multiples of 16 and squared. Analogous to net_resolution but applied to the face keypoint detector. 320x320 usually works fine while giving a substantial speed up when there are multiple faces in the image.”);
  1. OpenPose Hand
  • DEFINE_bool(hand, false, “Enables hand keypoint detection. It will share some parameters from the body pose, e.g. model_folder. Analogously to --face, it will also slow down the performance, increase the required GPU memory and its speed depends on the number of people.”);
  • DEFINE_int32(hand_detector, 0, “Kind of hand rectangle detector. Analogous to --face_detector.”);
  • DEFINE_string(hand_net_resolution, “368x368”, “Multiples of 16 and squared. Analogous to net_resolution but applied to the hand keypoint detector.”);
  • DEFINE_int32(hand_scale_number, 1, “Analogous to scale_number but applied to the hand keypoint detector. Our best results were found with hand_scale_number = 6 and hand_scale_range = 0.4.”);
  • DEFINE_double(hand_scale_range, 0.4, “Analogous purpose to scale_gap but applied to the hand keypoint detector. Total range between smallest and biggest scale. The scales will be centered in ratio 1. E.g., if scaleRange = 0.4 and scalesNumber = 2, then there will be 2 scales, 0.8 and 1.2.”);
  1. OpenPose 3-D Reconstruction
  • DEFINE_bool(3d, false, “Running OpenPose 3-D reconstruction demo: 1) Reading from a stereo camera system. 2) Performing 3-D reconstruction from the multiple views. 3) Displaying 3-D reconstruction results. Note that it will only display 1 person. If multiple people are present, it will fail.”);
  • DEFINE_int32(3d_min_views, -1, “Minimum number of views required to reconstruct each keypoint. By default (-1), it will require max(2, min(4, #cameras-1)) cameras to see the keypoint in order to reconstruct it.”);
  • DEFINE_int32(3d_views, -1, “Complementary option for --image_dir or --video. OpenPose will read as many images per iteration, allowing tasks such as stereo camera processing (--3d). Note that --camera_parameter_path must be set. OpenPose must find as many xml files in the parameter folder as this number indicates.”);
  1. Extra algorithms
  • DEFINE_bool(identification, false, “Experimental, not available yet. Whether to enable people identification across frames.”);
  • DEFINE_int32(tracking, -1, “Experimental, not available yet. Whether to enable people tracking across frames. The value indicates the number of frames where tracking is run between each OpenPose keypoint detection. Select -1 (default) to disable it or 0 to run simultaneously OpenPose keypoint detector and tracking for potentially higher accuracy than only OpenPose.”);
  • DEFINE_int32(ik_threads, 0, “Experimental, not available yet. Whether to enable inverse kinematics (IK) from 3-D keypoints to obtain 3-D joint angles. By default (0 threads), it is disabled. Increasing the number of threads will increase the speed but also the global system latency.”);
  1. OpenPose Rendering
  • DEFINE_int32(part_to_show, 0, “Prediction channel to visualize: 0 (default) for all the body parts, 1 for the background heat map, 2 for the superposition of heatmaps, 3 for the superposition of PAFs, 4-(4+#keypoints) for each body part heat map, the following ones for each body part pair PAF.”);
  • DEFINE_bool(disable_blending, false, “If enabled, it will render the results (keypoint skeletons or heatmaps) on a black background, instead of being rendered into the original image. Related: part_to_show, alpha_pose, and alpha_heatmap.”);
  1. OpenPose Rendering Pose
  • DEFINE_double(render_threshold, 0.05, “Only estimated keypoints whose score confidences are higher than this threshold will be rendered. Note: Rendered refers only to visual display in the OpenPose basic GUI, not in the saved results. Generally, a high threshold (> 0.5) will only render very clear body parts; while small thresholds (~0.1) will also output guessed and occluded keypoints, but also more false positives (i.e., wrong detections).”);
  • DEFINE_int32(render_pose, -1, “Set to 0 for no rendering, 1 for CPU rendering (slightly faster), and 2 for GPU rendering (slower but greater functionality, e.g., alpha_X flags). If -1, it will pick CPU if CPU_ONLY is enabled, or GPU if CUDA is enabled. If rendering is enabled, it will render both outputData and cvOutputData with the original image and desired body part to be shown (i.e., keypoints, heat maps or PAFs).”);
  • DEFINE_double(alpha_pose, 0.6, “Blending factor (range 0-1) for the body part rendering. 1 will show it completely, 0 will hide it. Only valid for GPU rendering.”);
  • DEFINE_double(alpha_heatmap, 0.7, “Blending factor (range 0-1) between heatmap and original frame. 1 will only show the heatmap, 0 will only show the frame. Only valid for GPU rendering.”);
  1. OpenPose Rendering Face
  • DEFINE_double(face_render_threshold, 0.4, “Analogous to render_threshold, but applied to the face keypoints.”);
  • DEFINE_int32(face_render, -1, “Analogous to render_pose but applied to the face. Extra option: -1 to use the same configuration that render_pose is using.”);
  • DEFINE_double(face_alpha_pose, 0.6, “Analogous to alpha_pose but applied to face.”);
  • DEFINE_double(face_alpha_heatmap, 0.7, “Analogous to alpha_heatmap but applied to face.”);
  1. OpenPose Rendering Hand
  • DEFINE_double(hand_render_threshold, 0.2, “Analogous to render_threshold, but applied to the hand keypoints.”);
  • DEFINE_int32(hand_render, -1, “Analogous to render_pose but applied to the hand. Extra option: -1 to use the same configuration that render_pose is using.”);
  • DEFINE_double(hand_alpha_pose, 0.6, “Analogous to alpha_pose but applied to hand.”);
  • DEFINE_double(hand_alpha_heatmap, 0.7, “Analogous to alpha_heatmap but applied to hand.”);
  1. Display
  • DEFINE_bool(fullscreen, false, “Run in full-screen mode (press f during runtime to toggle).”);
  • DEFINE_bool(no_gui_verbose, false, “Do not write text on output images on GUI (e.g., number of current frame and people). It does not affect the pose rendering.”);
  • DEFINE_int32(display, -1, “Display mode: -1 for automatic selection; 0 for no display (useful if there is no X server and/or to slightly speed up the processing if visual output is not required); 2 for 2-D display; 3 for 3-D display (if --3d enabled); and 1 for both 2-D and 3-D display.”);
  1. Command Line Interface Verbose
  • DEFINE_double(cli_verbose, -1.f, “If -1, it will be disabled (default). If it is a positive integer number, it will print on the command line every verbose frames. If number in the range (0,1), it will print the progress every verbose times the total of frames.”);
  1. Result Saving
  • DEFINE_string(write_images, “”, “Directory to write rendered frames in write_images_format image format.”);
  • DEFINE_string(write_images_format, “png”, “File extension and format for write_images, e.g., png, jpg or bmp. Check the OpenCV function cv::imwrite for all compatible extensions.”);
  • DEFINE_string(write_video, “”, “Full file path to write rendered frames in motion JPEG video format. It might fail if the final path does not finish in .avi. It internally uses cv::VideoWriter. Flag write_video_fps controls FPS. Alternatively, the video extension can be .mp4, resulting in a file with a much smaller size and allowing --write_video_with_audio. However, that would require: 1) Ubuntu or Mac system, 2) FFmpeg library installed (sudo apt-get install ffmpeg), 3) the temporary creation of a folder with the same file path as the final video (without the extension) to store the intermediate frames that will later be used to generate the final MP4 video.”);
  • DEFINE_double(write_video_fps, -1., “Frame rate for the recorded video. By default, it will try to get the input frames producer frame rate (e.g., input video or webcam frame rate). If the input frames producer does not have a set FPS (e.g., image_dir or webcam if OpenCV not compiled with its support), set this value accordingly (e.g., to the frame rate displayed by the OpenPose GUI).”);
  • DEFINE_bool(write_video_with_audio, false, “If the input is video and the output is so too, it will save the video with audio. It requires the output video file path finishing in .mp4 format (see write_video for details).”);
  • DEFINE_string(write_video_3d, “”, “Analogous to --write_video, but applied to the 3D output.”);
  • DEFINE_string(write_video_adam, “”, “Experimental, not available yet. Analogous to --write_video, but applied to Adam model.”);
  • DEFINE_string(write_json, “”, “Directory to write OpenPose output in JSON format. It includes body, hand, and face pose keypoints (2-D and 3-D), as well as pose candidates (if --part_candidates enabled).”);
  • DEFINE_string(write_coco_json, “”, “Full file path to write people pose data with JSON COCO validation format. If foot, face, hands, etc. JSON is also desired (--write_coco_json_variants), they are saved with different file name suffix.”);
  • DEFINE_int32(write_coco_json_variants, 1, “Add 1 for body, add 2 for foot, 4 for face, and/or 8 for hands. Use 0 to use all the possible candidates. E.g., 7 would mean body+foot+face COCO JSON.”);
  • DEFINE_int32(write_coco_json_variant, 0, “Currently, this option is experimental and only makes effect on car JSON generation. It selects the COCO variant for cocoJsonSaver.”);
  • DEFINE_string(write_heatmaps, “”, “Directory to write body pose heatmaps in PNG format. At least 1 add_heatmaps_X flag must be enabled.”);
  • DEFINE_string(write_heatmaps_format, “png”, “File extension and format for write_heatmaps, analogous to write_images_format. For lossless compression, recommended png for integer heatmaps_scale and float for floating values. See doc/output.md for more details.”);
  • DEFINE_string(write_keypoint, “”, “(Deprecated, use write_json) Directory to write the people pose keypoint data. Set format with write_keypoint_format.”);
  • DEFINE_string(write_keypoint_format, “yml”, “(Deprecated, use write_json) File extension and format for write_keypoint: json, xml, yaml & yml. Json not available for OpenCV < 3.0, use write_json instead.”);
  1. Result Saving - Extra Algorithms
  • DEFINE_string(write_bvh, “”, “Experimental, not available yet. E.g., ~/Desktop/mocapResult.bvh.”);
  1. UDP Communication
  • DEFINE_string(udp_host, “”, “Experimental, not available yet. IP for UDP communication. E.g., 192.168.0.1.”);
  • DEFINE_string(udp_port, “8051”, “Experimental, not available yet. Port number for UDP communication.”);
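
As a short combined sketch of some of the flags above (all values are only examples), the following command processes frames 100 to 300 of the video, takes every 5th frame, keeps at most 2 people, scales the keypoints to the [0,1] range, and saves them as JSON:

# Illustrative example combining producer, pose and saving flags
./build/examples/openpose/openpose.bin --video examples/media/video.avi --frame_first 100 --frame_step 5 --frame_last 300 --number_people_max 2 --keypoint_scale 3 --write_json output/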