I. Development Environment Setup
(1) Ubuntu 16.04 + ROS Kinetic installation
(2) Ubuntu 16.04 + ROS Kinetic + Basler camera setup and camera calibration
II. Downloading and Building darknet_ros
1. Downloading the code
GitHub project page: https://github.com/leggedrobotics/darknet_ros
Download commands:
mkdir -p darknet_ros_yolov3/src
cd darknet_ros_yolov3/src
git clone --recursive git@github.com:leggedrobotics/darknet_ros.git
2. Building
(1) Edit the Makefile in the darknet folder to enable GPU support (set GPU, CUDNN, and OPENCV; CPU-only also works if you have no GPU). The modified Makefile:
GPU=1
CUDNN=1
OPENCV=1
OPENMP=1
DEBUG=0

ARCH= -gencode arch=compute_30,code=sm_30 \
      -gencode arch=compute_35,code=sm_35 \
      -gencode arch=compute_50,code=[sm_50,compute_50] \
      -gencode arch=compute_52,code=[sm_52,compute_52] \
      -gencode arch=compute_61,code=[sm_61,compute_61]

# This is what I use, uncomment if you know your arch and want to specify
# ARCH= -gencode arch=compute_52,code=compute_52

VPATH=./src/:./examples
SLIB=libdarknet.so
ALIB=libdarknet.a
EXEC=darknet
OBJDIR=./obj/

CC=gcc
NVCC=nvcc
AR=ar
ARFLAGS=rcs
OPTS=-Ofast
LDFLAGS= -lm -pthread
COMMON= -Iinclude/ -Isrc/
CFLAGS=-Wall -Wno-unused-result -Wno-unknown-pragmas -Wfatal-errors -fPIC

ifeq ($(OPENMP), 1)
CFLAGS+= -fopenmp
endif

ifeq ($(DEBUG), 1)
OPTS=-O0 -g
endif

CFLAGS+=$(OPTS)

ifeq ($(OPENCV), 1)
COMMON+= -DOPENCV
CFLAGS+= -DOPENCV
LDFLAGS+= `pkg-config --libs opencv`
COMMON+= `pkg-config --cflags opencv`
endif

ifeq ($(GPU), 1)
COMMON+= -DGPU -I/usr/local/cuda/include/
CFLAGS+= -DGPU
LDFLAGS+= -L/usr/local/cuda/lib64 -lcuda -lcudart -lcublas -lcurand
endif

ifeq ($(CUDNN), 1)
COMMON+= -DCUDNN
CFLAGS+= -DCUDNN
LDFLAGS+= -lcudnn
endif

OBJ=gemm.o utils.o cuda.o deconvolutional_layer.o convolutional_layer.o list.o image.o activations.o im2col.o col2im.o blas.o crop_layer.o dropout_layer.o maxpool_layer.o softmax_layer.o data.o matrix.o network.o connected_layer.o cost_layer.o parser.o option_list.o detection_layer.o route_layer.o upsample_layer.o box.o normalization_layer.o avgpool_layer.o layer.o local_layer.o shortcut_layer.o logistic_layer.o activation_layer.o rnn_layer.o gru_layer.o crnn_layer.o demo.o batchnorm_layer.o region_layer.o reorg_layer.o tree.o lstm_layer.o l2norm_layer.o yolo_layer.o
EXECOBJA=captcha.o lsd.o super.o art.o tag.o cifar.o go.o rnn.o segmenter.o regressor.o classifier.o coco.o yolo.o detector.o nightmare.o darknet.o

ifeq ($(GPU), 1)
LDFLAGS+= -lstdc++
OBJ+=convolutional_kernels.o deconvolutional_kernels.o activation_kernels.o im2col_kernels.o col2im_kernels.o blas_kernels.o crop_layer_kernels.o dropout_layer_kernels.o maxpool_layer_kernels.o avgpool_layer_kernels.o
endif

EXECOBJ = $(addprefix $(OBJDIR), $(EXECOBJA))
OBJS = $(addprefix $(OBJDIR), $(OBJ))
DEPS = $(wildcard src/*.h) Makefile include/darknet.h

#all: obj backup results $(SLIB) $(ALIB) $(EXEC)
all: obj results $(SLIB) $(ALIB) $(EXEC)

$(EXEC): $(EXECOBJ) $(ALIB)
	$(CC) $(COMMON) $(CFLAGS) $^ -o $@ $(LDFLAGS) $(ALIB)

$(ALIB): $(OBJS)
	$(AR) $(ARFLAGS) $@ $^

$(SLIB): $(OBJS)
	$(CC) $(CFLAGS) -shared $^ -o $@ $(LDFLAGS)

$(OBJDIR)%.o: %.c $(DEPS)
	$(CC) $(COMMON) $(CFLAGS) -c $< -o $@

$(OBJDIR)%.o: %.cu $(DEPS)
	$(NVCC) $(ARCH) $(COMMON) --compiler-options "$(CFLAGS)" -c $< -o $@

obj:
	mkdir -p obj
backup:
	mkdir -p backup
results:
	mkdir -p results

.PHONY: clean

clean:
	rm -rf $(OBJS) $(SLIB) $(ALIB) $(EXEC) $(EXECOBJ) $(OBJDIR)/*
Pay particular attention to the ARCH section: look up the compute capability of your GPU model on NVIDIA's website and append the matching -gencode line at the end of the block. My GPU is a GTX 1080, so I added -gencode arch=compute_61,code=[sm_61,compute_61].
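For reference, here is a sketch of -gencode lines for a few common cards of that era (compute capabilities taken from NVIDIA's published list; confirm your own model before building):

```makefile
# Pascal cards (GTX 1060/1070/1080) -- compute capability 6.1:
#   -gencode arch=compute_61,code=[sm_61,compute_61]
# Maxwell cards (GTX 950/960/970/980) -- compute capability 5.2:
#   -gencode arch=compute_52,code=[sm_52,compute_52]
# Kepler cards (GTX 780, Tesla K40) -- compute capability 3.5:
#   -gencode arch=compute_35,code=sm_35
```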
After editing, build darknet:
cd darknet_ros_yolov3/src/darknet_ros/darknet
make
(2) Build darknet_ros. From the workspace root (with your ROS environment sourced):
cd darknet_ros_yolov3
catkin_make -DCMAKE_BUILD_TYPE=Release
This builds the whole project. The build checks whether the two model files yolov2-tiny.weights and yolov3.weights exist under {catkin_ws}/darknet_ros/darknet_ros/yolo_network_config/weights; to keep the repository small, the code does not ship with these files, so the build downloads them automatically, which is another long wait.
If you already have the model files, copy them into the folder above before starting the build and they will not be downloaded again.
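If you want to pre-seed the weights manually, the following sketch creates the folder and shows the download commands. The path assumes the workspace layout from step 1, and the URLs are the ones the darknet_ros build script fetched from at the time of writing; verify both against the CMakeLists.txt in your checkout.

```shell
# Create the weights folder inside the cloned package and (optionally)
# download the models into it before running catkin_make.
WEIGHTS_DIR=darknet_ros_yolov3/src/darknet_ros/darknet_ros/yolo_network_config/weights
mkdir -p "$WEIGHTS_DIR"
# Uncomment to download (yolov3.weights is over 200 MB):
# wget -P "$WEIGHTS_DIR" http://pjreddie.com/media/files/yolov2-tiny.weights
# wget -P "$WEIGHTS_DIR" http://pjreddie.com/media/files/yolov3.weights
```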
3. detecting_test: image detection test
You can modify or reuse the topics here for your own development. For example, when I ran darknet_ros and wanted to read images from a camera, I changed the camera_reading topic to /pylon_camera_node/image_raw.
doc — the images used for detection
include — header files
launch — three launch files, which can be run with roslaunch
src — the source code, mainly the interface between darknet and ROS
test — test programs
yolo_network_config — network configuration: the cfg folder holds the hyperparameters and structure of each network, and the weights folder holds the trained models, which must be downloaded before the program can run
Running detecting_test: most online tutorials do not explain how to run this part, so here are the concrete steps.
To run this package, I modified its source code:
image_publisher.cpp
/**************************************************************************
* @author z.h
* @date 2019.1.4
* @usage:image_publisher.cpp
* @brief:@params[IN]:@return:
**************************************************************************/
#include <ros/ros.h>
#include <opencv2/opencv.hpp>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
int main(int argc, char *argv[])
{
    ros::init(argc, argv, "image_publisher");
    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    // The advertised topic must match the camera_reading topic in darknet_ros's ros.yaml
    image_transport::Publisher pub = it.advertise("/camere/image", 1);

    // Load the test image once; adjust the path to your own setup
    cv::Mat image = cv::imread("/home/lufeng/darknet_ros_yolov3/src/detecting_test/data/2.jpg", CV_LOAD_IMAGE_COLOR);
    if (image.empty()) {
        ROS_ERROR("Failed to load the test image; check the path.");
        return 1;
    }
    sensor_msgs::ImagePtr msg = cv_bridge::CvImage(std_msgs::Header(), "bgr8", image).toImageMsg();

    ros::Rate loop_rate(5);  // publish at 5 Hz
    while (nh.ok()) {
        pub.publish(msg);
        ros::spinOnce();
        loop_rate.sleep();
    }
    return 0;
}
image_subscriber.cpp
/**************************************************************************
* @author z.h
* @date 2019.1.4
* @usage:image_subscriber.cpp
* @brief:@params[IN]:@return:
**************************************************************************/
#include <ros/ros.h>
#include <opencv2/opencv.hpp>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
    try
    {
        // Show the annotated image and save it to disk; adjust the path to your own setup
        cv::imshow("view", cv_bridge::toCvShare(msg, "bgr8")->image);
        cv::imwrite("/home/lufeng/darknet_ros_yolov3/src/detecting_test/data/output.jpg", cv_bridge::toCvShare(msg, "bgr8")->image);
        cv::waitKey(10);
    }
    catch (cv_bridge::Exception& e)
    {
        ROS_ERROR("Could not convert from '%s' to 'bgr8'.", msg->encoding.c_str());
    }
}

int main(int argc, char **argv)
{
    ros::init(argc, argv, "image_subscriber");
    ros::NodeHandle nh;
    cv::namedWindow("view");
    //cv::startWindowThread();
    image_transport::ImageTransport it(nh);
    // Subscribe to the annotated image published by darknet_ros
    image_transport::Subscriber sub = it.subscribe("/darknet_ros/detection_image", 1, imageCallback);
    ros::spin();
    cv::destroyWindow("view");
    return 0;
}
Notes on the changes
In image_publisher.cpp, the published topic name was changed; it must match the camera_reading topic in ros.yaml, otherwise darknet will not receive the images for detection. Set the image path to match your own directory layout:
image_transport::Publisher pub = it.advertise("/camere/image", 1);
cv::Mat image = cv::imread("/home/xxxx/darknet_ros_yolov3/src/detecting_test/data/2.jpg", CV_LOAD_IMAGE_COLOR);
In image_subscriber.cpp, the path where the detection result is saved was changed:
cv::imwrite("/home/xxxx/darknet_ros_yolov3/src/detecting_test/data/output.jpg", cv_bridge::toCvShare(msg, "bgr8")->image);
Note that the subscribed topic name must match the detection_image topic under publishers in ros.yaml, so that darknet's detection result is received and saved.
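For reference, the relevant sections of ros.yaml look roughly like this (a sketch based on a typical darknet_ros checkout; the exact fields may differ in your version, and the topic names shown are the ones used in this tutorial):

```yaml
subscribers:
  camera_reading:
    topic: /camere/image                  # must match the topic image_publisher advertises
    queue_size: 1

publishers:
  detection_image:
    topic: /darknet_ros/detection_image   # the topic image_subscriber listens on
    queue_size: 1
    latch: true
```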
Run (each command in its own terminal):
roscore
rosrun detecting_image image_publisher
roslaunch darknet_ros darknet_ros.launch
rosrun detecting_image image_subscriber
In other words: image_publisher in detecting_test publishes the image topic "/camere/image"; darknet_ros receives the image, runs detection, and publishes the annotated result on "/darknet_ros/detection_image"; image_subscriber then receives that topic and saves the detection image.
The final detection result is shown below:
4. Real-time detection with an external Basler industrial camera
Note: for Basler camera driver installation and starting the camera node, see the links at the top of this article!
Before running detection with darknet_ros, the configuration must be changed so that the topic darknet_ros subscribes to matches the image topic published by pylon_camera_node.
Open darknet_ros/config/ros.yaml and change:
subscribers:
  camera_reading:
    topic: /camera/rgb/image_raw
    queue_size: 1
to
subscribers:
  camera_reading:
    topic: /pylon_camera_node/image_raw
    queue_size: 1
Return to the workspace root and run:
source devel/setup.bash
roslaunch darknet_ros darknet_ros.launch
Start the camera node:
roslaunch pylon_camera pylon_camera_node.launch
Real-time detection results (the annotated stream on /darknet_ros/detection_image can also be viewed with rqt_image_view):
(video)