C++ Course Project

Assignment:
Design a piece of C++ software with the following features:
a) An HTTP server that receives HTTP requests from the front end, dispatches each request to the corresponding task handler according to the request type, replies with the result, and achieves a reasonable level of concurrency;
b) The HTTP server implements four algorithms: face detection, facial landmark detection, hand keypoint detection, and human-body keypoint detection;
c) Use the lock-free queue and thread pool from Assignment 2 as the base modules for concurrent task management;
d) Implement a simple front end in any language on any platform (PC/web/mobile); the front end sends an image and a task type, retrieves the result from the server, and displays it in the UI.

Notes:
a) Complete the assignment in groups of 2 or 3; each member's contribution must be visible in the git commit history;
b) Create your group's branch at [link placeholder] and manage all code through git;
c) The final repository should include the design flowchart, front-end and back-end code, scripts that build successfully, and a demo video covering all tested features;
d) Pay attention to code style and readability.

References:
a) C++ HTTP server: [link placeholder]
b) OpenCV face detection and facial landmark detection: [link placeholder]
c) Hand and human-body keypoint detection: [link placeholder]

#include "workflow/WFFacilities.h"
#include <csignal>
#include "wfrest/HttpServer.h"
#include "wfrest/PathUtil.h"
#include <opencv2/opencv.hpp>
#include <opencv2/face.hpp>
#include "drawLandmarks.hpp"
#include <opencv2/dnn.hpp>
#include <opencv2/imgproc.hpp>
#include <opencv2/highgui.hpp>


using namespace cv::dnn;
using namespace std;
using namespace cv;
using namespace cv::face;
using namespace wfrest;

const int POSE_PAIRS[3][20][2] = {
        {   // COCO body
                {1, 2}, {1, 5}, {2, 3},
                {3, 4}, {5, 6}, {6, 7},
                {1, 8}, {8, 9}, {9, 10},
                {1, 11}, {11, 12}, {12, 13},
                {1, 0}, {0, 14},
                {14, 16}, {0, 15}, {15, 17}
        },
        {   // MPI body
                {0, 1}, {1, 2}, {2, 3},
                {3, 4}, {1, 5}, {5, 6},
                {6, 7}, {1, 14}, {14, 8}, {8, 9},
                {9, 10}, {14, 11}, {11, 12}, {12, 13}
        },
        {   // hand
                {0, 1}, {1, 2}, {2, 3}, {3, 4},         // thumb
                {0, 5}, {5, 6}, {6, 7}, {7, 8},         // index
                {0, 9}, {9, 10}, {10, 11}, {11, 12},    // middle
                {0, 13}, {13, 14}, {14, 15}, {15, 16},  // ring
                {0, 17}, {17, 18}, {18, 19}, {19, 20}   // little
        }};
// NOTE: the MPI group (row 1) is required even though this server only uses COCO
// and HAND, because the hand handler below indexes POSE_PAIRS with midx = 2.

// Name of the most recently uploaded image. NOTE: this is shared, unsynchronized
// state; concurrent uploads can race. A per-request name would be safer.
std::string img_name = "index.jpg";

static WFFacilities::WaitGroup wait_group(1);

void sig_handler(int signo) {
    wait_group.done();
}

int main() {
    signal(SIGINT, sig_handler);

    HttpServer svr;

    // Serve the front-end page.
    svr.GET("/img_path", [](const HttpReq *req, HttpResp *resp) {
        resp->File("index.html");
    });


    svr.POST("/img_path/upload_fix", [](const HttpReq *req, HttpResp *resp) {
        Form &form = req->form();

        if (form.empty()) {
            resp->set_status(HttpStatusBadRequest);
        } else {
            for (auto &part: form) {
                // part.second is <filename, filecontent>.
                std::pair<std::string, std::string> &fileinfo = part.second;
                // file->filename SHOULD NOT be trusted. See Content-Disposition on MDN
                // https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Disposition#directives
                // The filename is always optional and must not be used blindly by the application:
                // path information should be stripped, and conversion to the server file system rules should be done.
                if (fileinfo.first.empty()) {
                    continue;
                }
                fprintf(stderr, "filename : %s\n", fileinfo.first.c_str());
                // Keep the sanitized name so the detection handlers read the same file we save.
                img_name = PathUtil::base(fileinfo.first);
                // Stripping path components restricts the upload to the current directory.
                resp->Save(PathUtil::base(fileinfo.first), std::move(fileinfo.second));
            }
        }
    });

    svr.POST("/img_path/face", [](const HttpReq *req, HttpResp *resp) {

        // Load the face detector.
        // [1] Haar face detector:
        // CascadeClassifier faceDetector("haarcascade_frontalface_alt2.xml");
        // [2] LBP face detector:
        CascadeClassifier faceDetector(
                "/home/gaochencheng/repo/c_language/task/gaochencheng/faceL/lbpcascade_frontalface_improved.xml");

        // Create the Facemark object and load the landmark model.
        Ptr<Facemark> facemark = FacemarkLBF::create();
        facemark->loadModel("/home/gaochencheng/repo/c_language/task/gaochencheng/faceL/lbfmodel.yaml");

        // Read the uploaded image.
        String image = "/home/gaochencheng/repo/c_language/task/gaochencheng/task3/build/" + img_name;
        Mat frame = imread(image);
        if (frame.empty()) {
            resp->set_status(HttpStatusBadRequest);
            return;
        }

        // The face detector expects grayscale input.
        Mat gray;
        cvtColor(frame, gray, COLOR_BGR2GRAY);

        // Detect faces.
        vector<Rect> faces;
        faceDetector.detectMultiScale(gray, faces);

        // Run the landmark detector; landmarks[i] holds the points of face i.
        vector<vector<Point2f>> landmarks;
        bool success = facemark->fit(frame, faces, landmarks);

        if (success) {
            // Draw each face rectangle and its landmarks on the frame.
            for (size_t i = 0; i < faces.size(); i++) {
                rectangle(frame, faces[i], Scalar(255, 0, 0));
                // Custom helper that draws landmark shapes/contours.
                drawLandmarks(frame, landmarks[i]);
                // OpenCV's built-in landmark drawing: drawFacemarks.
                drawFacemarks(frame, landmarks[i], Scalar(0, 0, 255));
            }
        }

        imwrite("/home/gaochencheng/repo/c_language/task/gaochencheng/task3/build/facial-landmark.jpg", frame);
        string img_face_name = "facial-landmark.jpg";
        resp->File(img_face_name);
    });


    svr.POST("/img_path/body", [](const HttpReq *req, HttpResp *resp) {
        String modelTxt = "/home/gaochencheng/repo/c_language/task/gaochencheng/hw_models/openpose_pose_coco.prototxt";
        String modelBin = "/home/gaochencheng/repo/c_language/task/gaochencheng/hw_models/pose_iter_440000.caffemodel";
        String imageFile = "/home/gaochencheng/repo/c_language/task/gaochencheng/task3/build/" + img_name;
        String dataset = "COCO";

        int W_in = 368;
        int H_in = 368;
        float thresh = 0.1;
        float scale = 0.003922;  // ~1/255: map pixel values into [0, 1]

        // Select the skeleton description for the chosen dataset.
        int midx, npairs, nparts;
        if (!dataset.compare("COCO")) {
            midx = 0;
            npairs = 17;
            nparts = 18;
        } else if (!dataset.compare("MPI")) {
            midx = 1;
            npairs = 14;
            nparts = 16;
        } else if (!dataset.compare("HAND")) {
            midx = 2;
            npairs = 20;
            nparts = 22;
        } else {
            std::cerr << "Can't interpret dataset parameter: " << dataset << std::endl;
            resp->set_status(HttpStatusInternalServerError);
            return;  // don't kill the whole server over one bad request
        }
        // Read the network model and the input image.
        Net net = readNet(modelBin, modelTxt);
        Mat img = imread(imageFile);
        if (img.empty()) {
            std::cerr << "Can't read image from the file: " << imageFile << std::endl;
            resp->set_status(HttpStatusBadRequest);
            return;
        }
        // Send the image through the network.
        Mat inputBlob = blobFromImage(img, scale, Size(W_in, H_in), Scalar(0, 0, 0), false, false);
        net.setInput(inputBlob);
        Mat result = net.forward();
        // The result is an array of "heatmaps": the probability of a body part being at location (x, y).

        int H = result.size[2];
        int W = result.size[3];
        // Find the position of each body part.
        vector<Point> points(22);
        for (int n = 0; n < nparts; n++) {
            // Slice the heatmap of the corresponding body part.
            Mat heatMap(H, W, CV_32F, result.ptr(0, n));
            // One maximum per heatmap.
            Point p(-1, -1), pm;
            double conf;
            minMaxLoc(heatMap, 0, &conf, 0, &pm);
            if (conf > thresh)
                p = pm;
            points[n] = p;
        }

        // Connect the body parts and draw the skeleton.
        float SX = float(img.cols) / W;
        float SY = float(img.rows) / H;
        for (int n = 0; n < npairs; n++) {
            // Look up the two connected body/hand parts.
            Point2f a = points[POSE_PAIRS[midx][n][0]];
            Point2f b = points[POSE_PAIRS[midx][n][1]];

            // Skip pairs for which we did not find enough confidence.
            if (a.x <= 0 || a.y <= 0 || b.x <= 0 || b.y <= 0)
                continue;

            // Scale to image size.
            a.x *= SX;
            a.y *= SY;
            b.x *= SX;
            b.y *= SY;

            line(img, a, b, Scalar(0, 200, 0), 2);
            circle(img, a, 3, Scalar(0, 0, 200), -1);
            circle(img, b, 3, Scalar(0, 0, 200), -1);
        }
        imwrite("/home/gaochencheng/repo/c_language/task/gaochencheng/task3/build/body-landmark.jpg", img);
        string img_body_name = "body-landmark.jpg";
        resp->File(img_body_name);
    });

    svr.POST("/img_path/hand", [](const HttpReq *req, HttpResp *resp) {
        String modelTxt = "/home/gaochencheng/repo/c_language/task/gaochencheng/hw_models/pose_deploy.prototxt";
        String modelBin = "/home/gaochencheng/repo/c_language/task/gaochencheng/hw_models/pose_iter_102000.caffemodel";
        String imageFile = "/home/gaochencheng/repo/c_language/task/gaochencheng/task3/build/" + img_name;
        String dataset = "HAND";

        int W_in = 368;
        int H_in = 368;
        float thresh = 0.1;
        float scale = 0.003922;  // ~1/255: map pixel values into [0, 1]

        // Select the skeleton description for the chosen dataset.
        int midx, npairs, nparts;
        if (!dataset.compare("COCO")) {
            midx = 0;
            npairs = 17;
            nparts = 18;
        } else if (!dataset.compare("MPI")) {
            midx = 1;
            npairs = 14;
            nparts = 16;
        } else if (!dataset.compare("HAND")) {
            midx = 2;
            npairs = 20;
            nparts = 22;
        } else {
            std::cerr << "Can't interpret dataset parameter: " << dataset << std::endl;
            resp->set_status(HttpStatusInternalServerError);
            return;  // don't kill the whole server over one bad request
        }
        // Read the network model and the input image.
        Net net = readNet(modelBin, modelTxt);
        Mat img = imread(imageFile);
        if (img.empty()) {
            std::cerr << "Can't read image from the file: " << imageFile << std::endl;
            resp->set_status(HttpStatusBadRequest);
            return;
        }
        // Send the image through the network.
        Mat inputBlob = blobFromImage(img, scale, Size(W_in, H_in), Scalar(0, 0, 0), false, false);
        net.setInput(inputBlob);
        Mat result = net.forward();
        // The result is an array of "heatmaps": the probability of a hand part being at location (x, y).

        int H = result.size[2];
        int W = result.size[3];
        // Find the position of each hand part.
        vector<Point> points(22);
        for (int n = 0; n < nparts; n++) {
            // Slice the heatmap of the corresponding hand part.
            Mat heatMap(H, W, CV_32F, result.ptr(0, n));
            // One maximum per heatmap.
            Point p(-1, -1), pm;
            double conf;
            minMaxLoc(heatMap, 0, &conf, 0, &pm);
            if (conf > thresh)
                p = pm;
            points[n] = p;
        }

        // Connect the hand parts and draw the skeleton.
        float SX = float(img.cols) / W;
        float SY = float(img.rows) / H;
        for (int n = 0; n < npairs; n++) {
            // Look up the two connected body/hand parts.
            Point2f a = points[POSE_PAIRS[midx][n][0]];
            Point2f b = points[POSE_PAIRS[midx][n][1]];

            // Skip pairs for which we did not find enough confidence.
            if (a.x <= 0 || a.y <= 0 || b.x <= 0 || b.y <= 0)
                continue;

            // Scale to image size.
            a.x *= SX;
            a.y *= SY;
            b.x *= SX;
            b.y *= SY;

            line(img, a, b, Scalar(0, 200, 0), 2);
            circle(img, a, 3, Scalar(0, 0, 200), -1);
            circle(img, b, 3, Scalar(0, 0, 200), -1);
        }
        imwrite("/home/gaochencheng/repo/c_language/task/gaochencheng/task3/build/hand-landmark.jpg", img);
        string img_hand_name = "hand-landmark.jpg";
        resp->File(img_hand_name);
    });

    // curl -X POST http://ip:port/form \
    // -F "file=@/path/file" \
    // -H "Content-Type: multipart/form-data"
    svr.POST("/form", [](const HttpReq *req, HttpResp *resp) {
        if (req->content_type() != MULTIPART_FORM_DATA) {
            resp->set_status(HttpStatusBadRequest);
            return;
        }
        /*
            // https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/POST
            // <name ,<filename, body>>
            using Form = std::map<std::string, std::pair<std::string, std::string>>;
        */
        const Form &form = req->form();
        for (auto &it: form) {
            auto &name = it.first;
            auto &file_info = it.second;
            fprintf(stderr, "%s : %s = %s\n",
                    name.c_str(),
                    file_info.first.c_str(),
                    file_info.second.c_str());
        }
    });

    if (svr.start(8888) == 0) {
        svr.list_routes();
        wait_group.wait();
        svr.stop();
    } else {
        fprintf(stderr, "Cannot start server\n");
        exit(1);
    }
    return 0;
}
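The COCO/MPI/HAND parameter selection above is copied verbatim into both the body and hand handlers. It can be factored into one small helper, sketched below (`PoseParams` and `pose_params_for` are illustrative names, not part of the original code):

```cpp
#include <optional>
#include <string>

// Skeleton description selected by dataset name, mirroring the if/else
// chains duplicated in the body and hand handlers.
struct PoseParams {
    int midx;    // row of POSE_PAIRS to use
    int npairs;  // number of limb pairs to draw
    int nparts;  // number of keypoint heatmaps the model produces
};

std::optional<PoseParams> pose_params_for(const std::string &dataset) {
    if (dataset == "COCO") return PoseParams{0, 17, 18};
    if (dataset == "MPI")  return PoseParams{1, 14, 16};
    if (dataset == "HAND") return PoseParams{2, 20, 22};
    return std::nullopt;  // unknown dataset: let the handler reply with an error status
}
```

Each handler would then call `pose_params_for(dataset)` once and return an error response when it yields no value, instead of repeating the chain.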






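The upload handler's warning about untrusted filenames is worth spelling out: a client can send `filename="../../etc/passwd"` in its Content-Disposition header. The sketch below shows the kind of basename stripping that `PathUtil::base` is used for in the handler (this is an illustration, not wfrest's actual implementation):

```cpp
#include <string>

// Strip any directory components from a client-supplied filename so a saved
// upload cannot escape the server's working directory: "../../etc/passwd"
// becomes "passwd". Handles both '/' and '\\' separators.
std::string sanitize_filename(const std::string &name) {
    size_t pos = name.find_last_of("/\\");
    return (pos == std::string::npos) ? name : name.substr(pos + 1);
}
```

Whatever helper is used, the sanitized name should be applied consistently to both the save path and the global `img_name` the detection handlers read.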