iOS Object Detection in Practice (Part 1)

Before anything else, read the earlier post on calling the camera with iOS + OpenCV and complete that basic setup; everything below builds on it.

References:

Getting the path to a file inside an iOS project: https://www.jianshu.com/p/a4935e6427ec

Converting between std::string and NSString: https://blog.csdn.net/zhangqiaoge/article/details/77678446

How to find and read a txt file dragged into Xcode: https://www.jianshu.com/p/21326da86327
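As a quick illustration of what the first two references cover, here is a minimal sketch (assuming an Objective-C++ .mm file) of looking up a bundled file and converting between NSString and std::string:

    #import <Foundation/Foundation.h>
    #include <string>

    // Look up a file that was added to the Xcode project and copied into
    // the app bundle; pathForResource returns nil if the file is missing.
    NSString *cfgPath = [[NSBundle mainBundle] pathForResource:@"coco_lite_trial6"
                                                        ofType:@"cfg"];
    if (cfgPath != nil) {
        // NSString -> std::string (UTF-8); never call UTF8String on nil.
        std::string cppPath = [cfgPath UTF8String];
        // std::string -> NSString
        NSString *roundTrip = [NSString stringWithUTF8String:cppPath.c_str()];
        NSLog(@"config lives at %@", roundTrip);
    }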

This first post covers how to run the basic demo.

The result:

The switch in the UI is a flashlight toggle; it has no effect on the detection itself and is not covered in detail in this post.

[screenshot of the running demo]

We use the YOLO-LITE model because it can run in real time on a phone; frames are captured with the camera support that ships with OpenCV for iOS.

The pretrained model was trained on the COCO dataset.

The video-processing code is written mainly in Objective-C. If you don't know Objective-C, don't worry: the parts that matter here are practically identical to C++.

Importing the model

Download the pretrained YOLO-LITE model from https://github.com/reu2018DL/YOLO-LITE; you mainly need two files, the .cfg network definition and the .weights weight file.

Drag both files into the Xcode project, making sure they are added to the app target so they end up in the app bundle.
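If you want to confirm the files actually made it into the bundle (a common stumbling block), a quick sanity check along these lines can go in viewDidLoad; the file names match the ones model_load() uses further down:

    NSString *cfg = [[NSBundle mainBundle] pathForResource:@"coco_lite_trial6"
                                                    ofType:@"cfg"];
    NSString *weights = [[NSBundle mainBundle] pathForResource:@"coco_lite_trial6_653550"
                                                        ofType:@"weights"];
    // pathForResource returns nil when a file is not in the bundle.
    NSAssert(cfg != nil && weights != nil,
             @"Model files missing; check target membership in Xcode");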

With the basic iOS + OpenCV environment configured (see the prerequisite post), the project is ready for the detection code.

Core code

Add the following to the processImage() callback:

    Mat blob;

    // The iOS camera delivers RGBA frames; the network expects 3 channels.
    Mat image_t;
    cvtColor(image, image_t, cv::COLOR_RGBA2RGB, 3);

    // Build a 4D input blob: scale pixels to [0,1] and resize to 416x416.
    blobFromImage(image_t, blob, 1/255.0, cvSize(inpWidth, inpHeight), Scalar(0,0,0), true, false);

    // Forward pass through all unconnected (output) layers.
    net.setInput(blob);
    vector<Mat> outs;
    net.forward(outs, getOutputsNames(net));

    // Turn the raw network output into boxes drawn on the frame.
    postprocess(image_t, outs);

    // Overlay the per-frame inference time from the dnn profiler.
    vector<double> layersTimes;
    double freq = getTickFrequency() / 1000;
    double t = net.getPerfProfile(layersTimes) / freq;
    string label = format("Inference time for a frame : %.2f ms", t);
    putText(image_t, label, cv::Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 255));

    // Convert back to RGBA so the camera view can display the result.
    cvtColor(image_t, image, cv::COLOR_RGB2RGBA);

This is the body of the camera while-loop from the original C++ sample, rewritten in Objective-C form; it is almost identical to the C++ code.

  • The input image (4-channel RGBA) must first be converted to 3 channels before it can be fed to the network; the blobFromImage() call is annotated just below.
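For reference, here is the blobFromImage() call with each argument annotated (a sketch based on the OpenCV dnn documentation, including one subtlety about swapRB):

    blobFromImage(image_t,        // input frame (3-channel RGB)
                  blob,           // output: 4D NCHW blob
                  1/255.0,        // scale factor: map pixel values into [0,1]
                  cvSize(inpWidth, inpHeight), // resize to the 416x416 net input
                  Scalar(0,0,0),  // mean to subtract (none here)
                  true,           // swapRB: swaps the first and last channels;
                                  // since the frame is already RGB, this in fact
                                  // feeds BGR to the net. YOLO tolerates it, but
                                  // false is arguably more correct for RGB input.
                  false);         // crop: plain resize, no center crop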

Next, the remaining helper functions are adapted into Objective-C++ versions in the same way. I also wrote a model_load() function that locates the pretrained files in the app bundle.

Here is the complete code:

#import <opencv2/opencv.hpp>
#import <opencv2/imgproc/types_c.h>
#import <opencv2/imgcodecs/ios.h>
#import <opencv2/videoio/cap_ios.h>
#import "ViewController.h"


using namespace cv;
using namespace dnn;
using namespace std;

@interface ViewController ()<CvVideoCameraDelegate>
{
    Mat cvImage;
}
@property (weak, nonatomic) IBOutlet UISwitch *lightSwitch;
@property (weak, nonatomic) IBOutlet UIImageView *imageView;
@property(nonatomic,strong)CvVideoCamera * videoCamera;
@end

@implementation ViewController

// Constants and forward declarations: the helper functions are defined
// below the methods that call them, so they must be declared here first.
static void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame);
static void postprocess(Mat& frame, const vector<Mat>& outs);
static void model_load();
static vector<String> getOutputsNames(const Net& net);

static Net net;
static int inpWidth = 416;
static int inpHeight = 416;
static float confThreshold = 0.5;
static float nmsThreshold = 0.4;

// The 80 COCO class names, in the order the model was trained on
static string classes[]= {"person","bicycle","car", "motorbike","aeroplane","bus","train","truck","boat","traffic light",
    "fire hydrant","stop sign","parking meter","bench","bird","cat","dog","horse","sheep","cow",
    "elephant","bear","zebra","giraffe","backpack","umbrella","handbag","tie","suitcase","frisbee",
    "skis","snowboard","sports ball","kite","baseball bat","baseball glove","skateboard","surfboard","tennis racket","bottle",
    "wine glass","cup","fork","knife","spoon","bowl","banana","apple","sandwich","orange",
    "broccoli","carrot","hot dog","pizza","donut","cake","chair","sofa","pottedplant","bed",
    "diningtable","toilet","tvmonitor","laptop","mouse","remote","keyboard","cell phone","microwave","oven",
    "toaster","sink","refrigerator","book","clock","vase","scissors","teddy bear","hair drier","toothbrush"};


- (void)viewDidLoad {
    [super viewDidLoad];
    // Do any additional setup after loading the view, typically from a nib.
    
    self.videoCamera = [[CvVideoCamera alloc]initWithParentView:self.imageView];
    self.videoCamera.defaultAVCaptureDevicePosition = AVCaptureDevicePositionBack;

    self.videoCamera.defaultAVCaptureSessionPreset =AVCaptureSessionPreset640x480;
    
    self.videoCamera.defaultAVCaptureVideoOrientation = AVCaptureVideoOrientationPortrait;
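    // defaultFPS is a requested upper bound; the camera will deliver
    // whatever the device and session preset actually support.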
    self.videoCamera.defaultFPS = 200;
    self.videoCamera.grayscaleMode = false;
    
    self.videoCamera.delegate = self;
    
    model_load();

    [self.videoCamera start];

}

static void model_load(){
    NSString *modelConfiguration_t = [[NSBundle mainBundle] pathForResource:@"coco_lite_trial6" ofType:@"cfg"];
    NSString *modelWeights_t = [[NSBundle mainBundle] pathForResource:@"coco_lite_trial6_653550" ofType:@"weights"];
    String modelConfiguration =  [modelConfiguration_t UTF8String];
    String modelWeights = [modelWeights_t UTF8String];
    
    net = readNetFromDarknet(modelConfiguration, modelWeights);
    net.setPreferableBackend(DNN_BACKEND_OPENCV);
    net.setPreferableTarget(DNN_TARGET_CPU);
}

// Main processing callback: invoked for every camera frame
- (void)processImage:(Mat&)image{
    Mat blob;
    
    Mat image_t;
    cvtColor(image, image_t, cv::COLOR_RGBA2RGB, 3);
    
    blobFromImage(image_t, blob, 1/255.0, cvSize(inpWidth, inpHeight), Scalar(0,0,0), true, false);

    net.setInput(blob);
    vector<Mat> outs;

    net.forward(outs, getOutputsNames(net));
    postprocess(image_t, outs);
    vector<double> layersTimes;
    double freq = getTickFrequency() / 1000;
    double t = net.getPerfProfile(layersTimes) / freq;
    string label = format("Inference time for a frame : %.2f ms", t);
    putText(image_t, label, cv::Point(0, 15), FONT_HERSHEY_SIMPLEX, 0.5, Scalar(0, 0, 255));

    cvtColor(image_t, image, cv::COLOR_RGB2RGBA);
}

static void postprocess(Mat& frame, const vector<Mat>& outs)
{
    vector<int> classIds;
    vector<float> confidences;
    vector<cv::Rect> boxes;
    
    for (size_t i = 0; i < outs.size(); ++i)
    {
        // Scan through all the bounding boxes output from the network and keep only the
        // ones with high confidence scores. Assign the box's class label as the class
        // with the highest score for the box.
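        // Each row of outs[i] is laid out as [center_x, center_y, width,
        // height, objectness, 80 class scores], all normalized to [0, 1].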
        float* data = (float*)outs[i].data;
        for (int j = 0; j < outs[i].rows; ++j, data += outs[i].cols)
        {
            Mat scores = outs[i].row(j).colRange(5, outs[i].cols);
            cv::Point classIdPoint;
            double confidence;
            // Get the value and location of the maximum score
            minMaxLoc(scores, 0, &confidence, 0, &classIdPoint);
            if (confidence > confThreshold)
            {
                int centerX = (int)(data[0] * frame.cols);
                int centerY = (int)(data[1] * frame.rows);
                int width = (int)(data[2] * frame.cols);
                int height = (int)(data[3] * frame.rows);
                int left = centerX - width / 2;
                int top = centerY - height / 2;
                
                classIds.push_back(classIdPoint.x);
                confidences.push_back((float)confidence);
                boxes.push_back(cv::Rect(left, top, width, height));
            }
        }
    }
    
    // Perform non maximum suppression to eliminate redundant overlapping boxes with
    // lower confidences
    vector<int> indices;
    NMSBoxes(boxes, confidences, confThreshold, nmsThreshold, indices);
    for (size_t i = 0; i < indices.size(); ++i)
    {
        int idx = indices[i];
        cv::Rect box = boxes[idx];
        drawPred(classIds[idx], confidences[idx], box.x, box.y,
                 box.x + box.width, box.y + box.height, frame);
    }
}
static void drawPred(int classId, float conf, int left, int top, int right, int bottom, Mat& frame)
{
    //Draw a rectangle displaying the bounding box
    rectangle(frame, cv::Point(left, top), cv::Point(right, bottom), cv::Scalar(255, 178, 50), 3);
    //Get the label for the class name and its confidence
    string label = format("%.2f", conf);
    CV_Assert(classId < 80);  // classes[] holds exactly the 80 COCO names
    label = classes[classId] + ":" + label;
    
    //Display the label at the top of the bounding box
    int baseLine;
    cv::Size labelSize = getTextSize(label, FONT_HERSHEY_SIMPLEX, 0.5, 1, &baseLine);
    top = max(top, labelSize.height);
    rectangle(frame, cv::Point(left, top - round(1.5*labelSize.height)), cv::Point(left + round(1.5*labelSize.width), top + baseLine), Scalar(255, 255, 255), FILLED);
    putText(frame, label, cv::Point(left, top), FONT_HERSHEY_SIMPLEX, 0.75, Scalar(0,0,0),1);
}
static vector<String> getOutputsNames(const Net& net)
{
    static vector<String> names;
    if (names.empty())
    {
        //Get the indices of the output layers, i.e. the layers with unconnected outputs
        vector<int> outLayers = net.getUnconnectedOutLayers();
        //get the names of all the layers in the network
        vector<String> layersNames = net.getLayerNames();
        // Get the names of the output layers in names
        names.resize(outLayers.size());
        for (size_t i = 0; i < outLayers.size(); ++i)
            names[i] = layersNames[outLayers[i] - 1];
    }
    return names;
}


- (void)didReceiveMemoryWarning {
    [super didReceiveMemoryWarning];
    // Dispose of any resources that can be recreated.
}

- (IBAction)turnlight:(UISwitch *)sender {
    Class captureDeviceClass = NSClassFromString(@"AVCaptureDevice");
    if (captureDeviceClass != nil) {
        AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
        if ([device hasTorch] && [device hasFlash]){
            
            [device lockForConfiguration:nil]; // nil discards lock errors; pass an NSError ** to inspect them
            if(sender.isOn){
                [device setTorchMode:AVCaptureTorchModeOn];
                [device setFlashMode:AVCaptureFlashModeOn];
            } else {
                [device setTorchMode:AVCaptureTorchModeOff];
                [device setFlashMode:AVCaptureFlashModeOff];
            }
            [device unlockForConfiguration];
        }
    }
}

@end
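
One practical note: because this file mixes OpenCV's C++ API with Objective-C, it must be compiled as Objective-C++. Rename ViewController.m to ViewController.mm (or set its type to Objective-C++ in Xcode's file inspector), otherwise the C++ headers will not compile.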

For the last part I added the flashlight switch; the code was found online.
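One more aside: the hand-rolled getOutputsNames() subtracts 1 because getUnconnectedOutLayers() returns 1-based layer ids. Recent OpenCV releases ship a helper that does the same thing, so, assuming such a version, the forward call could be shortened to:

    // Equivalent to net.forward(outs, getOutputsNames(net)) on recent OpenCV:
    vector<Mat> outs;
    net.forward(outs, net.getUnconnectedOutLayersNames());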
