Installing mediapipe-python and dlib on Ubuntu

1. mediapipe

Reference: Installing Mediapipe-python on Win10 and Jetson Nano (CSDN blog, @_@学到头晕)

jiuqiant/mediapipe_python_aarch64 (github.com)

Note:

The Python version of your interpreter must match the Python version tag in the .whl filename (e.g. a cp36 wheel requires Python 3.6).
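
A quick way to check the match before installing (a minimal sketch; the cp36/aarch64 values come from the wheel filenames used later in these notes):

```python
import sys
import platform

# A wheel tagged "cp36-cp36m-linux_aarch64" needs CPython 3.6 on an aarch64 machine.
print("Python:", sys.version_info[:3])                       # expect (3, 6, x) for a cp36 wheel
print("Machine:", platform.machine())                        # expect "aarch64" on Jetson Nano
print("Implementation:", platform.python_implementation())   # expect "CPython"
```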

Open question:

Could we skip building altogether and just install someone else's prebuilt wheel for the matching Python version?
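
One way to answer this is to list the wheel tags the local interpreter accepts and compare them against the prebuilt wheel's filename; a minimal sketch, assuming the packaging library (a common pip dependency) is installed:

```python
# Print the wheel tags this interpreter can install, most preferred first.
from packaging.tags import sys_tags

for tag in list(sys_tags())[:10]:
    print(tag)  # e.g. cp36-cp36m-manylinux2014_aarch64, cp36-cp36m-linux_aarch64, ...
```

If the wheel's own tag (here cp36-cp36m-linux_aarch64) appears in that list, pip can install it directly.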

Problems encountered:

Everything below describes problems hit along the way; on Ubuntu 18.04 and 20.04, following the instructions at the links above works and MediaPipe installs fine.

(Build error during compilation) $ python3 setup.py gen_protos && python3 setup.py bdist_wheel

/usr/local/lib/python3.6/dist-packages/setuptools/dist.py:476: UserWarning: The version specified ('dev') is an invalid version, this may not work as expected with newer versions of setuptools, pip, and PyPI. Please see PEP 440 for more details.
"details." % version
running gen_protos
generating proto file: mediapipe/framework/thread_pool_executor_pb2.py
Missing output directives

Unresolved.

New approach: set up MediaPipe on Jetson Nano without compiling:

Reference: Configuring Mediapipe on Jetson Nano (no compilation) (CSDN blog, Loading_create)

New problems:

Problem 1: ./setup_opencv.sh reports an error

1. The git repository could not be downloaded. Open the .sh file, download and extract the archive manually, and check which path the .sh script expects it in; pay close attention to the mkdir, cd, and git lines inside the .sh file.

2. fatal: not a git repository (or any of the parent directories): .git: fix this by changing the git command to git init (initialize the repository in that directory first).

Problem 2: ./v0.8.5/numpy119x/mediapipe-0.8.5_cuda102-cp36-cp36m-linux_aarch64_numpy119x_jetsonnano_L4T32.5.1_download.sh reports an error:

The v0.8.5 directory contains nothing but a download.sh. Run ./download.sh from inside that directory to obtain mediapipe-0.8.5_cuda102-cp36-none-linux_aarch64.whl; after that, python3 -m pip install mediapipe/dist/mediapipe-0.8.5_cuda102-cp36-cp36m-linux_aarch64.whl installs MediaPipe successfully.
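
Once the wheel is installed, a short smoke test confirms that the package imports and that the solutions API works (a minimal sketch; test.jpg is a placeholder image path, not part of the original notes):

```python
import cv2
import mediapipe as mp  # if this import succeeds, the wheel matched the interpreter

# Run face detection once on a single image as a sanity check.
img = cv2.imread("test.jpg")  # placeholder image path
with mp.solutions.face_detection.FaceDetection(min_detection_confidence=0.5) as detector:
    results = detector.process(cv2.cvtColor(img, cv2.COLOR_BGR2RGB))
print("faces detected:", 0 if results.detections is None else len(results.detections))
```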

The following page provides prebuilt packages (.whl files):

Reference: Jetson Nano from scratch: installing and using both the CPU and GPU builds of MediaPipe (CSDN blog, 哒哒️)

2. dlib

Reference: Installing dlib on Ubuntu (CSDN blog, 青春须早为,岂能长少年)

A note about the Ubuntu file system:

System files are found under Other Locations > Computer in the file manager.
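
Before running the full landmark example below, a quick import check verifies that dlib built and installed correctly (a minimal sketch):

```python
import dlib

print(dlib.__version__)        # confirms the module imports
print(dlib.DLIB_USE_CUDA)      # True only if dlib was compiled with CUDA support
detector = dlib.get_frontal_face_detector()  # loads the built-in HOG face detector
print(type(detector))
```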

Below is Python code that uses OpenCV and dlib to extract the eyes and the mouth from an image:

```python
import cv2
import dlib
import numpy as np

# Load the face detector and the 68-point facial landmark model.
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor('shape_predictor_68_face_landmarks.dat')

# Load the image and convert it to grayscale.
img = cv2.imread('test.jpg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# Detect faces.
faces = detector(gray)

# For each face, extract the eyes and the mouth.
for face in faces:
    # Get the 68 facial landmarks.
    landmarks = predictor(gray, face)

    # Left eye (landmark points 36-41).
    left_eye_pts = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(36, 42)]
    left_eye_mask = np.zeros(img.shape[:2], dtype=np.uint8)
    cv2.drawContours(left_eye_mask, [np.array(left_eye_pts, dtype=np.int32)], -1, (255, 255, 255), -1, cv2.LINE_AA)
    left_eye = cv2.bitwise_and(img, img, mask=left_eye_mask)

    # Right eye (landmark points 42-47).
    right_eye_pts = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(42, 48)]
    right_eye_mask = np.zeros(img.shape[:2], dtype=np.uint8)
    cv2.drawContours(right_eye_mask, [np.array(right_eye_pts, dtype=np.int32)], -1, (255, 255, 255), -1, cv2.LINE_AA)
    right_eye = cv2.bitwise_and(img, img, mask=right_eye_mask)

    # Mouth (landmark points 48-67).
    mouth_pts = [(landmarks.part(i).x, landmarks.part(i).y) for i in range(48, 68)]
    mouth_mask = np.zeros(img.shape[:2], dtype=np.uint8)
    cv2.drawContours(mouth_mask, [np.array(mouth_pts, dtype=np.int32)], -1, (255, 255, 255), -1, cv2.LINE_AA)
    mouth = cv2.bitwise_and(img, img, mask=mouth_mask)

    # Show the results.
    cv2.imshow('Left Eye', left_eye)
    cv2.imshow('Right Eye', right_eye)
    cv2.imshow('Mouth', mouth)
    cv2.waitKey(0)

cv2.destroyAllWindows()
```

In this code we first load the trained face detector and the facial landmark predictor. We then load an image and convert it to grayscale. Next, the face detector finds all faces in the image, and the landmark predictor returns the facial key points for each face. Finally, the key points are used to mask out the eye and mouth regions, which are displayed in separate windows.
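
If an actual cropped patch is needed rather than a masked copy of the full frame, a bounding rectangle around the same landmark points can be taken; a minimal sketch, reusing the mouth_pts and img variables from the code above (the output filename is just an example):

```python
# Crop a rectangular mouth patch instead of masking out the rest of the image.
x, y, w, h = cv2.boundingRect(np.array(mouth_pts, dtype=np.int32))
mouth_crop = img[y:y + h, x:x + w]
cv2.imwrite('mouth_crop.jpg', mouth_crop)  # example output path, not from the original notes
```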