Python Face Detection and Filtering: OpenCV/TensorFlow Facial Keypoint Detection and Real-time Filtering

This project builds an end-to-end facial recognition system that uses computer vision and deep learning to detect facial keypoints. It is split into three parts: 1) pre-processing and face detection with OpenCV; 2) training a convolutional neural network (CNN) to detect facial keypoints; 3) combining the first two parts to identify facial keypoints in arbitrary images. An optional extension lets the system apply real-time face filters to video.

AIND Term II, Computer Vision Capstone Project

Facial Keypoint Detection and Real-time Filtering

Project Overview

Welcome to the Computer Vision capstone project in the AI Nanodegree program! In this project, you'll combine your knowledge of computer vision techniques and deep learning to build an end-to-end facial keypoint recognition system. Facial keypoints include points around the eyes, nose, and mouth on any face and are used in many applications, from facial tracking to emotion recognition. Your completed code should be able to take in any image containing faces and identify the location of each face and their facial keypoints, as shown below.

The project will be broken up into a few main parts in one Python notebook:

Part 1 : Investigating OpenCV, pre-processing, and face detection

Part 2 : Training a Convolutional Neural Network (CNN) to detect facial keypoints

Part 3 : Putting parts 1 and 2 together to identify facial keypoints on any image!

You'll also be given optional exercises that allow you to extend this project so that it works on video and allows you to implement fun face filters in real-time!

Project Instructions

All of the starting code and resources you'll need to complete this project are in a GitHub repo! Before you can get started coding, you'll have to make sure that you have all the libraries and dependencies required to support this project.

Amazon Web Services

This project requires GPU acceleration to run efficiently. Refer to the Udacity instructions for setting up a GPU instance for this project, and to the project instructions in the classroom for setup (AIND students will find the link there).

Local Environment Instructions

You should follow the AWS instructions in your classroom for best results.

Clone the repository, and navigate to the downloaded folder.

git clone https://github.com/udacity/AIND-CV-FacialKeypoints.git

cd AIND-CV-FacialKeypoints

Create (and activate) a new environment with Python 3.5 and the numpy package.

Linux or Mac: conda create --name aind-cv python=3.5 numpy

source activate aind-cv

Windows: conda create --name aind-cv python=3.5 numpy scipy

activate aind-cv

Install/Update TensorFlow (for this project, you may use CPU only).

Option 1: To install TensorFlow with GPU support, follow the guide to install the necessary NVIDIA software on your system. If you are using the Udacity AMI, you can skip this step and only need to install the tensorflow-gpu package: pip install tensorflow-gpu==1.1.0

Option 2: To install TensorFlow with CPU support only: pip install tensorflow==1.1.0

Install/Update Keras.

pip install keras -U

Switch Keras backend to TensorFlow.

Linux or Mac: KERAS_BACKEND=tensorflow python -c "from keras import backend"

Windows: set KERAS_BACKEND=tensorflow

python -c "from keras import backend"

Install a few required pip packages (including OpenCV).

pip install -r requirements.txt

Data

All of the data you'll need to train a neural network is in the AIND-CV-FacialKeypoints repo, in the subdirectory data. This folder contains a zipped training set and a zipped test set.

Navigate to the data directory

cd data

Unzip the training and test data (in that same location). If you are on Windows, you can unzip this data by double-clicking the zipped files. On Mac, you can use the terminal commands below.

unzip training.zip

unzip test.zip

You should be left with two .csv files with the same names as the zip files. You may delete the zipped files.

Troubleshooting: If you are having trouble unzipping this data, you can download that same training and test data on Kaggle.

Now, with that data unzipped, you should have everything you need!
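The two CSVs appear to follow the Kaggle Facial Keypoints Detection format (an assumption based on the Kaggle link above): each row holds up to 15 (x, y) keypoint coordinate pairs plus an Image column containing a 96x96 grayscale image as a space-separated pixel string. A minimal loading sketch under that assumption:

import numpy as np
import pandas as pd

def load_keypoint_data(csv_path='data/training.csv'):
    # Hypothetical loader, assuming the Kaggle Facial Keypoints Detection CSV layout
    df = pd.read_csv(csv_path).dropna()                       # drop rows with missing keypoints
    # Each 'Image' entry is a space-separated string of 96*96 = 9216 pixel values
    X = np.vstack(df['Image'].apply(lambda s: np.array(s.split(), dtype='float32')).values)
    X = X.reshape(-1, 96, 96, 1) / 255.0                      # scale pixels to [0, 1]
    y = df.drop('Image', axis=1).values.astype('float32')     # 30 keypoint coordinates per face
    y = (y - 48) / 48.0                                       # scale coordinates to roughly [-1, 1]
    return X, y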

Notebook

Navigate back to the repo. (Your aind-cv environment should still be activated at this point.)

cd

cd AIND-CV-FacialKeypoints

Open the notebook and follow the instructions.

jupyter notebook CV_project.ipynb

NOTE: While some code has already been implemented to get you started, you will need to implement additional functionality to successfully answer all of the questions included in the notebook. Unless requested, do not modify code that has already been included.

Evaluation

Your project will be reviewed by a Udacity reviewer against the Computer Vision project rubric. Review this rubric thoroughly, and self-evaluate your project before submission. All criteria found in the rubric must meet specifications for you to pass.

Project Submission

When you are ready to submit your project, collect the following files and compress them into a single zip archive for upload:

The CV_project.ipynb file with fully functional code. All code cells should be executed and displaying output, and all questions should be answered.

An HTML or PDF export of the project notebook with the name report.html or report.pdf.

Any additional images used for the project that were not supplied to you.

Please do not include the project data sets in the data/ folder. They are too big and only your executed notebook code and text will be evaluated.

Alternatively, your submission could consist of only the GitHub link to your repository.

Project Rubric

Files Submitted

Criteria

Meets Specifications

Submission Files

CV_project.ipynb --> all Python functions requested in the main notebook CV_project.ipynb should be implemented, and all TODO items should be completed.

Step 1: Add eye detections to the face detection setup

Criteria

Meets Specifications

Add eye detections to the current face detection setup.

The submission returns proper code detecting and marking eyes in the given test image.
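One way to approach this (a sketch, assuming OpenCV's standard Haar cascade files and a hypothetical test image path; the notebook may store the cascades and images elsewhere) is to run the eye detector only inside each detected face region:

import cv2

face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
eye_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_eye.xml')

image = cv2.imread('images/test_image.jpg')                  # hypothetical test image path
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

faces = face_cascade.detectMultiScale(gray, 1.25, 6)         # scaleFactor, minNeighbors are illustrative
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (255, 0, 0), 3)
    # Search for eyes only inside the face region to reduce false positives
    roi_gray = gray[y:y + h, x:x + w]
    eyes = eye_cascade.detectMultiScale(roi_gray, 1.1, 10)
    for (ex, ey, ew, eh) in eyes:
        cv2.rectangle(image, (x + ex, y + ey), (x + ex + ew, y + ey + eh), (0, 255, 0), 2)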

Step 2: De-noise an image for better face detection

Criteria

Meets Specifications

De-noise an image for better face detection.

The submission de-noises the given noisy test image, and face detection then performs correctly on the cleaned image.
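A sketch of this step using OpenCV's non-local means filter (the de-noising method and parameter values here are illustrative, not the notebook's required settings):

import cv2

face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
noisy = cv2.imread('images/noisy_image.jpg')                 # hypothetical noisy test image
# Positional args: h, hColor, templateWindowSize, searchWindowSize
denoised = cv2.fastNlMeansDenoisingColored(noisy, None, 10, 10, 7, 21)

gray = cv2.cvtColor(denoised, cv2.COLOR_BGR2GRAY)
for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.25, 6):
    cv2.rectangle(denoised, (x, y), (x + w, y + h), (255, 0, 0), 3)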

Step 3: Blur and edge detect an image

Criteria

Meets Specifications

Blur and edge detect a test image.

The submission returns a test image that has first been blurred and then edge-detected, using the specified parameters.
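A minimal sketch of the blur-then-edge-detect order (kernel size and Canny thresholds are placeholders; use the values the notebook specifies):

import cv2

gray = cv2.cvtColor(cv2.imread('images/test_image.jpg'), cv2.COLOR_BGR2GRAY)
blurred = cv2.GaussianBlur(gray, (5, 5), 0)                  # smooth out fine texture and noise first
edges = cv2.Canny(blurred, 50, 150)                          # then compute the edge map of the blurred image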

Step 4: Automatically hide the identity of a person (blur a face)

Criteria

Meets Specifications

Find and blur the face of an individual in a test image.

The submission should provide code to automatically detect the face of a person in a test image, then blur their face to mask their identity.
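A sketch of the anonymizing step, reusing Haar-cascade face detection and blurring only the detected region (paths and kernel size are illustrative):

import cv2

face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
image = cv2.imread('images/test_image.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.25, 6):
    roi = image[y:y + h, x:x + w]
    # A kernel that is large relative to the face gives a strong anonymizing blur
    image[y:y + h, x:x + w] = cv2.blur(roi, (51, 51))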

Step 5: Specify the network architecture

Criteria

Meets Specifications

Specify a convolutional network architecture for learning correspondence between input faces and facial keypoints.

The submission successfully provides code to build an appropriate convolutional network.
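A minimal Keras sketch of one possible architecture, assuming 96x96 grayscale inputs and 30 outputs (15 (x, y) keypoint pairs, matching the Kaggle-format data); the rubric only asks for an appropriate convolutional network, so layer counts and sizes here are illustrative:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense, Dropout

model = Sequential()
model.add(Conv2D(16, (3, 3), activation='relu', input_shape=(96, 96, 1)))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(32, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Flatten())
model.add(Dense(256, activation='relu'))
model.add(Dropout(0.5))
model.add(Dense(30))                                         # linear output: 30 keypoint coordinates
model.summary()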

Step 6: Compile and train the model

Criteria

Meets Specifications

Compile and train your convnet.

The submission successfully compiles and trains the convnet.
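A compile-and-train sketch, assuming Keras 2 argument names; mean squared error is a natural loss for coordinate regression, and the optimizer, epoch count, and batch size are placeholders:

from keras.callbacks import ModelCheckpoint

# X_train, y_train as produced by the hypothetical loader sketched in the Data section above
X_train, y_train = load_keypoint_data('data/training.csv')

model.compile(optimizer='adam', loss='mean_squared_error', metrics=['mae'])
checkpoint = ModelCheckpoint('my_model.h5', save_best_only=True, verbose=1)
history = model.fit(X_train, y_train,
                    validation_split=0.2,
                    epochs=30, batch_size=64,
                    callbacks=[checkpoint])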

Step 7: Answer a few questions and visualize the loss

Criteria

Meets Specifications

Answer a few questions about your training and visualize the loss function.

The submission successfully discusses any potential issues with the training, and answers all of the provided questions.
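The history object returned by model.fit makes the loss easy to visualize; a short plotting sketch:

import matplotlib.pyplot as plt

plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')   # present when a validation split is used
plt.xlabel('epoch')
plt.ylabel('mean squared error')
plt.legend()
plt.show()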

Step 8: Complete a facial keypoints detector and complete the CV pipeline

Criteria

Meets Specifications

Combine OpenCV face detection with your trained convnet facial keypoint detector.

The submission successfully combines OpenCV's face detection with the trained convnet keypoint detector.
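A sketch of how the full pipeline can fit together, assuming the 96x96 input convention and the coordinate scaling used in the earlier sketches (file names and detector parameters are hypothetical):

import cv2
import numpy as np
from keras.models import load_model

face_cascade = cv2.CascadeClassifier('detector_architectures/haarcascade_frontalface_default.xml')
model = load_model('my_model.h5')                            # trained model saved in Step 6

image = cv2.imread('images/test_image.jpg')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in face_cascade.detectMultiScale(gray, 1.25, 6):
    # Crop and resize the face to the network's 96x96 grayscale input
    crop = cv2.resize(gray[y:y + h, x:x + w], (96, 96)) / 255.0
    pred = model.predict(crop.reshape(1, 96, 96, 1))[0]
    # Undo the [-1, 1] coordinate scaling, then map back into the original image
    pts = pred.reshape(-1, 2) * 48 + 48
    pts[:, 0] = pts[:, 0] * (w / 96.0) + x
    pts[:, 1] = pts[:, 1] * (h / 96.0) + y
    for (px, py) in pts:
        cv2.circle(image, (int(px), int(py)), 2, (0, 255, 0), -1)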
