gvim 8.0.281.0
An enhanced Vim build for Windows, version 8.0.281.0; personally tested and working.
caffe2-3rdparty.tar.gz
Third-party packages for Caffe2, an excellent open-source image-processing framework, including:
benchmark-4bf28e611b55de8a2d4eece3c335e014f8b0f630.zip
cnmem-28a182d49529da49f4ac4e3941cec3edf16b3540.zip
cub-01347a797c620618d09e7d2d90bce4be4c42513e.zip
eigen-ae9889a130bd0a9d3007f41d015563c2e8ac605f.zip
FP16-2e9eeeb0b463736d13b887d790ac7e72e78fa4bc.zip
FXdiv-8f85044fb41e560508cd69ed26c9afb9cc120e8a.zip
gloo-530878247b04c423fd35477208f68e70b8126e2d.zip
googletest-5e7fd50e17b6edf1cadff973d0ec68966cf3265e.zip
ios-cmake-e24081928d9ceec4f4adfcf12293f1e2a20eaedc.zip
nccl-2a974f5ca2aa12b178046b2206b43f1fd69d9fae.zip
nervanagpu-d4eefd50fbd7d34a17dddbc829888835d67b5f4a.zip
NNPACK-02bfa475d64040cd72b7c01daa9e862523ae87da.zip
protobuf-a428e42072765993ff674fda72863c9f1aa2d268.zip
psimd-0b26a3fb98dd6af7e1f4e0796c56df6b32b1c016.zip
pthreadpool-9e17903a3fc963fe86b151aaddae7cf1b1d34815.zip
pybind11-f38f359f96815421f1780c1a676715efd041f1ae.zip
xware-desktop_0.13.20160328_amd64.deb
ffmpeg-3.3.3
(Build and install from ffmpeg-3.3.3.tar.bz2 after extracting.)
1. Download ffmpeg-*.tar.gz
Go to the FFmpeg site https://ffmpeg.org/download.html, pick the version you want to upgrade to, and download it; for example, the author downloaded ffmpeg-2.0.tar.gz.
2. Build and install
(Note: if ./configure fails with a yasm error, download yasm-1.3.0.tar.gz and run:
tar zxvf yasm-1.3.0.tar.gz
cd yasm-1.3.0
./configure
make && make install)
tar -zxvf ffmpeg-2.0.tar.gz
cd ffmpeg-2.0
./configure --enable-shared --prefix=/usr/local/ffmpeg
make
make install
3. Dynamic linker configuration
vi /etc/ld.so.conf
Add the line: /usr/local/ffmpeg/lib
Then run
ldconfig
4. Add FFmpeg to the environment variables
vi /etc/profile
Append the following:
FFMPEG=/usr/local/ffmpeg
export PATH=$PATH:$FFMPEG/bin
5. Apply the changes immediately
source /etc/profile
Then run
ffmpeg -version
which prints:
ffmpeg version 2.0
built on Jul 24 2013 09:59:06 with gcc 4.4.7 (GCC) 20120313 (Red Hat 4.4.7-3)
configuration: --enable-shared --prefix=/usr/local/ffmpeg
libavutil 52. 38.100 / 52. 38.100
libavcodec 55. 18.102 / 55. 18.102
libavformat 55. 12.100 / 55. 12.100
libavdevice 55. 3.100 / 55. 3.100
libavfilter 3. 79.101 / 3. 79.101
libswscale 2. 3.100 / 2. 3.100
libswresample 0. 17.102 / 0. 17.102
This confirms the upgrade succeeded. If you hit an error such as ffmpeg: error while loading shared libraries: libavdevice.so.55: cannot open shared object file: No such file or directory, check that step 3 was done correctly.
imageio source (deep learning)
imageio-2.2.0.tar.gz: extract, then install; run ./configure and read the README.
Sublime Text, portable edition
Sublime Text, usable right after extraction; for Windows.
Caffe Master
Caffe source with SSD, Examples, and a demo.py image-set test; also available from the GitHub source at https://github.com/weiliu89/caffe/tree/ssd
ImageNet image-set annotation tool
An annotation tool for the ImageNet image set; polygon annotation, with other image sets supported as well.
caffe2 source
Caffe2, GitHub snapshot of 2017-08-03.
Third-party dependency bundle download: Caffe2_thirdparty
LabelImg source
LabelImg, an image-set annotation tool.
sudo apt-get install pyqt4-dev-tools
sudo pip install labelImg
completes the LabelImg setup.
MNIST image dataset
The MNIST image dataset as binary files, for use with Caffe.
cifar10-binary-part4.tar.gz
cs.toronto CIFAR-10 image set; downloading from abroad is slow. Four parts in total; this is the last part.
cifar-10-binary-part2.
cs.toronto CIFAR-10 image set, in four parts.
cifar10-binary.
Downloaded from www.cs.toronto.edu; foreign downloads are slow, so the file is provided here. Place it under data/cifar10 in the Caffe root directory and use it directly.
Overview of C++11 and C++14
An-Overview-of-C++11-and-C++14
opencv_python-4.2.0.34-cp37-cp37m-win_amd64.whl
The Python build of OpenCV. The cp37 tag means this wheel targets Python 3.7; it supports all x64 machines. On Windows, install it with pip into a matching Python 3.7 interpreter.
opencv 3.4 includes/libs, no build required
Extract to the directory containing the .sln. In the project properties, under C/C++ → Additional Include Directories, enter the Libs/x86/opencv_v3.4.0/include path; under Linker → All Options → Additional Library Directories, enter Libs/x86/opencv_v3.4.0/lib; Additional Dependencies:
opencv_aruco340.lib;opencv_aruco340d.lib;opencv_bgsegm340.lib;opencv_bgsegm340d.lib;opencv_bioinspired340.lib;opencv_bioinspired340d.lib;opencv_calib3d340.lib;opencv_calib3d340d.lib;opencv_ccalib340.lib;opencv_ccalib340d.lib;opencv_core340.lib;opencv_core340d.lib;opencv_datasets340.lib;opencv_datasets340d.lib;opencv_dnn340.lib;opencv_dnn340d.lib;opencv_dpm340.lib;opencv_dpm340d.lib;opencv_face340.lib;opencv_face340d.lib;opencv_features2d340.lib;opencv_features2d340d.lib;opencv_flann340.lib;opencv_flann340d.lib;opencv_fuzzy340.lib;opencv_fuzzy340d.lib;opencv_highgui340.lib;opencv_highgui340d.lib;opencv_imgcodecs340.lib;opencv_imgcodecs340d.lib;opencv_imgproc340.lib;opencv_imgproc340d.lib;opencv_img_hash340.lib;opencv_img_hash340d.lib;opencv_line_descriptor340.lib;opencv_line_descriptor340d.lib;opencv_ml340.lib;opencv_ml340d.lib;opencv_objdetect340.lib;opencv_objdetect340d.lib;opencv_optflow340.lib;opencv_optflow340d.lib;opencv_phase_unwrapping340.lib;opencv_phase_unwrapping340d.lib;opencv_photo340.lib;opencv_photo340d.lib;opencv_plot340.lib;opencv_plot340d.lib;opencv_reg340.lib;opencv_reg340d.lib;opencv_rgbd340.lib;opencv_rgbd340d.lib;opencv_saliency340.lib;opencv_saliency340d.lib;opencv_shape340.lib;opencv_shape340d.lib;opencv_stereo340.lib;opencv_stereo340d.lib;opencv_stitching340.lib;opencv_stitching340d.lib;opencv_structured_light340.lib;opencv_structured_light340d.lib;opencv_superres340.lib;opencv_superres340d.lib;opencv_surface_matching340.lib;opencv_surface_matching340d.lib;opencv_text340.lib;opencv_text340d.lib;opencv_tracking340.lib;opencv_tracking340d.lib;opencv_video340.lib;opencv_video340d.lib;opencv_videoio340.lib;opencv_videoio340d.lib;opencv_videostab340.lib;opencv_videostab340d.lib;opencv_xfeatures2d340.lib;opencv_xfeatures2d340d.lib;opencv_ximgproc340.lib;opencv_ximgproc340d.lib;opencv_xobjdetect340.lib;opencv_xobjdetect340d.lib;opencv_xphoto340.lib;opencv_xphoto340d.lib;
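The Additional Dependencies string above pairs each OpenCV module's release .lib with its d-suffixed debug .lib. A small sketch can regenerate the string for any module set; `opencv_dep_list` and the three module names below are illustrative only, not part of OpenCV:

```python
# Build the "Additional Dependencies" string: each OpenCV module contributes
# a release lib plus a "d"-suffixed debug lib.
def opencv_dep_list(modules, version="340"):
    libs = []
    for m in modules:
        libs.append("opencv_%s%s.lib" % (m, version))    # release build
        libs.append("opencv_%s%sd.lib" % (m, version))   # debug build
    return ";".join(libs) + ";"

print(opencv_dep_list(["core", "imgproc", "highgui"]))
# opencv_core340.lib;opencv_core340d.lib;opencv_imgproc340.lib;...
```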
tensorflow-2.2.0rc3-cp38-cp38-win_amd64.whl
tensorflow 2.2. Install with pip. If a resource refuses to download, copy the link and fetch it with a download tool instead. This is the official release; it can also be obtained from https://github.com/tensorflow/tensorflow/releases.
vehicle-speed-check-master.zip
An example of vehicle detection and tracking.
import cv2
import numpy as np
import math
import matplotlib.pyplot as plt

class Hog_descriptor():
    def __init__(self, img, cell_size=16, bin_size=8):
        self.img = img
        # Gamma-normalise the image, then rescale to 0-255
        self.img = np.sqrt(img / float(np.max(img)))
        self.img = self.img * 255
        self.cell_size = cell_size
        self.bin_size = bin_size
        self.angle_unit = 360 // self.bin_size
        assert type(self.bin_size) == int, "bin_size should be an integer"
        assert type(self.cell_size) == int, "cell_size should be an integer"
        assert 360 % self.bin_size == 0, "360 should be divisible by bin_size"

    def extract(self):
        height, width = self.img.shape
        gradient_magnitude, gradient_angle = self.global_gradient()
        gradient_magnitude = abs(gradient_magnitude)
        # One orientation histogram per cell (integer division for Python 3)
        cell_gradient_vector = np.zeros((height // self.cell_size, width // self.cell_size, self.bin_size))
        for i in range(cell_gradient_vector.shape[0]):
            for j in range(cell_gradient_vector.shape[1]):
                cell_magnitude = gradient_magnitude[i * self.cell_size:(i + 1) * self.cell_size,
                                                    j * self.cell_size:(j + 1) * self.cell_size]
                cell_angle = gradient_angle[i * self.cell_size:(i + 1) * self.cell_size,
                                            j * self.cell_size:(j + 1) * self.cell_size]
                cell_gradient_vector[i][j] = self.cell_gradient(cell_magnitude, cell_angle)
        hog_image = self.render_gradient(np.zeros([height, width]), cell_gradient_vector)
        hog_vector = []
        # Group 2x2 cells into blocks and L2-normalise each block
        for i in range(cell_gradient_vector.shape[0] - 1):
            for j in range(cell_gradient_vector.shape[1] - 1):
                block_vector = []
                block_vector.extend(cell_gradient_vector[i][j])
                block_vector.extend(cell_gradient_vector[i][j + 1])
                block_vector.extend(cell_gradient_vector[i + 1][j])
                block_vector.extend(cell_gradient_vector[i + 1][j + 1])
                mag = lambda vector: math.sqrt(sum(v ** 2 for v in vector))
                magnitude = mag(block_vector)
                if magnitude != 0:
                    normalize = lambda block_vector, magnitude: [element / magnitude for element in block_vector]
                    block_vector = normalize(block_vector, magnitude)
                hog_vector.append(block_vector)
        return hog_vector, hog_image

    def global_gradient(self):
        gradient_values_x = cv2.Sobel(self.img, cv2.CV_64F, 1, 0, ksize=5)
        gradient_values_y = cv2.Sobel(self.img, cv2.CV_64F, 0, 1, ksize=5)
        gradient_magnitude = cv2.addWeighted(gradient_values_x, 0.5, gradient_values_y, 0.5, 0)
        gradient_angle = cv2.phase(gradient_values_x, gradient_values_y, angleInDegrees=True)
        return gradient_magnitude, gradient_angle

    def cell_gradient(self, cell_magnitude, cell_angle):
        orientation_centers = [0] * self.bin_size
        for i in range(cell_magnitude.shape[0]):
            for j in range(cell_magnitude.shape[1]):
                gradient_strength = cell_magnitude[i][j]
                gradient_angle = cell_angle[i][j]
                min_angle, max_angle, mod = self.get_closest_bins(gradient_angle)
                # Split the magnitude between the two nearest bins
                orientation_centers[min_angle] += (gradient_strength * (1 - (mod / self.angle_unit)))
                orientation_centers[max_angle] += (gradient_strength * (mod / self.angle_unit))
        return orientation_centers

    def get_closest_bins(self, gradient_angle):
        idx = int(gradient_angle / self.angle_unit)
        mod = gradient_angle % self.angle_unit
        if idx == self.bin_size:
            return idx - 1, idx % self.bin_size, mod
        return idx, (idx + 1) % self.bin_size, mod

    def render_gradient(self, image, cell_gradient):
        cell_width = self.cell_size / 2
        max_mag = np.array(cell_gradient).max()
        for x in range(cell_gradient.shape[0]):
            for y in range(cell_gradient.shape[1]):
                cell_grad = cell_gradient[x][y]
                cell_grad /= max_mag
                angle = 0
                angle_gap = self.angle_unit
                for magnitude in cell_grad:
                    angle_radian = math.radians(angle)
                    x1 = int(x * self.cell_size + magnitude * cell_width * math.cos(angle_radian))
                    y1 = int(y * self.cell_size + magnitude * cell_width * math.sin(angle_radian))
                    x2 = int(x * self.cell_size - magnitude * cell_width * math.cos(angle_radian))
                    y2 = int(y * self.cell_size - magnitude * cell_width * math.sin(angle_radian))
                    cv2.line(image, (y1, x1), (y2, x2), int(255 * math.sqrt(magnitude)))
                    angle += angle_gap
        return image

img = cv2.imread('G://1.bmp', cv2.IMREAD_GRAYSCALE)
hog = Hog_descriptor(img, cell_size=8, bin_size=8)
vector, image = hog.extract()
plt.imshow(image, cmap=plt.cm.gray)
plt.show()
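The extract method above L2-normalises each 2x2 block of cell histograms by dividing by the block vector's magnitude. The effect can be checked in isolation; the 4-value block below is made up for illustration, not taken from a real image:

```python
import math

# The same L2 block normalization used in extract(), on a made-up block.
block = [3.0, 4.0, 0.0, 0.0]
magnitude = math.sqrt(sum(v ** 2 for v in block))   # 5.0
normalized = [v / magnitude for v in block]
print(normalized)  # [0.6, 0.8, 0.0, 0.0]
# The normalized block has unit length, so blocks from bright and dark
# regions become comparable.
print(math.sqrt(sum(v ** 2 for v in normalized)))
```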
CxLibSVM.rar
The libsvm library for SVMs, with libsvm wrapped in a class; a very good machine-learning starter example. It can be used for character recognition and makes a good pattern-recognition example.
Character templates: letter and digit characters
Character templates for character recognition: letters A-Z and digits 0-9, the templates needed for OCR. 6,393 characters in total, plus three-hundred-odd test characters.
opencv 4.2.0, no build required
OpenCV 4.2.0 including the non-free modules, so feature-related algorithms are available. The sample program follows.
#include "opencv2/opencv.hpp"
#include <vector>

using namespace cv;
using namespace std;

int main()
{
    Mat lenaImg = imread("G:\\1.bmp");
    Mat lenaImg2 = imread("G:\\2.bmp");
    auto akaze_detector = AKAZE::create();
    vector<KeyPoint> kpts_01, kpts_02;
    Mat descriptors1, descriptors2;
    akaze_detector->detectAndCompute(lenaImg, Mat(), kpts_01, descriptors1);
    akaze_detector->detectAndCompute(lenaImg2, Mat(), kpts_02, descriptors2);
    // Brute-force descriptor matching; AKAZE's default descriptor is binary,
    // so the Hamming-distance matcher is the appropriate one
    Ptr<DescriptorMatcher> matcher = DescriptorMatcher::create(DescriptorMatcher::BRUTEFORCE_HAMMING);
    vector<DMatch> matches;
    matcher->match(descriptors1, descriptors2, matches);
    // Draw the matches
    Mat img_matches;
    drawMatches(lenaImg, kpts_01, lenaImg2, kpts_02, matches, img_matches);
    imshow("AKAZE-Matches", img_matches);
    imwrite("D://result.png", img_matches);
    waitKey(0);
    return 0;
}
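The single-nearest-neighbour matching above keeps every correspondence, good and bad. A common refinement (not in the sample above) is Lowe's ratio test over the two nearest neighbours of each descriptor. The filtering logic itself is tiny; the distance pairs below are hypothetical, standing in for what a knnMatch call would return:

```python
# Lowe's ratio test: keep a match only when the best distance is clearly
# smaller than the second-best distance.
def ratio_test(knn_pairs, ratio=0.75):
    """knn_pairs: list of (best_distance, second_best_distance) per keypoint."""
    return [i for i, (d1, d2) in enumerate(knn_pairs) if d1 < ratio * d2]

# Hypothetical distances for four keypoints: the ambiguous ones
# (best and second-best almost equal) are rejected.
pairs = [(10.0, 40.0), (30.0, 31.0), (5.0, 100.0), (20.0, 21.0)]
print(ratio_test(pairs))  # [0, 2]
```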
MNIST-format data preparation.
Producing data in the MNIST format, useful when studying sklearn and other machine-learning libraries and algorithms. Tested with C++ in VS2017; suitable for licence-plate and digit recognition. The sklearn test program is below.
# Import the required packages
import numpy as np
import struct
import matplotlib.pyplot as plt
import os
# Load the SVM model
from sklearn import svm
# Used for data preprocessing
from sklearn import preprocessing
import time

# Path the data is loaded from
path = '.'

def load_mnist_train(path, kind='train'):
    labels_path = os.path.join(path, '%s-labels-idx1-ubyte' % kind)
    images_path = os.path.join(path, '%s-images-idx3-ubyte' % kind)
    with open(labels_path, 'rb') as lbpath:
        magic, n = struct.unpack('>II', lbpath.read(8))
        labels = np.fromfile(lbpath, dtype=np.uint8)
    with open(images_path, 'rb') as imgpath:
        magic, num, rows, cols = struct.unpack('>IIII', imgpath.read(16))
        images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784)
    return images, labels

def load_mnist_test(path, kind='t10k'):
    labels_path = os.path.join(path, '%s-labels-idx1-ubyte' % kind)
    images_path = os.path.join(path, '%s-images-idx3-ubyte' % kind)
    with open(labels_path, 'rb') as lbpath:
        magic, n = struct.unpack('>II', lbpath.read(8))
        labels = np.fromfile(lbpath, dtype=np.uint8)
    with open(images_path, 'rb') as imgpath:
        magic, num, rows, cols = struct.unpack('>IIII', imgpath.read(16))
        images = np.fromfile(imgpath, dtype=np.uint8).reshape(len(labels), 784)
    return images, labels

train_images, train_labels = load_mnist_train(path)
test_images, test_labels = load_mnist_test(path)
X = preprocessing.StandardScaler().fit_transform(train_images)
X_train = X[0:60000]
y_train = train_labels[0:60000]
print(time.strftime('%Y-%m-%d %H:%M:%S'))
model_svc = svm.LinearSVC()
#model_svc = svm.SVC()
model_svc.fit(X_train, y_train)
print(time.strftime('%Y-%m-%d %H:%M:%S'))
# Show the true labels and predictions of the first samples as a figure
x = preprocessing.StandardScaler().fit_transform(test_images)
x_test = x[0:10000]
y_pred = test_labels[0:10000]
print(model_svc.score(x_test, y_pred))
y = model_svc.predict(x)
fig1 = plt.figure(figsize=(8, 8))
fig1.subplots_adjust(left=0, right=1, bottom=0, top=1, hspace=0.05, wspace=0.05)
for i in range(100):
    ax = fig1.add_subplot(10, 10, i + 1, xticks=[], yticks=[])
    ax.imshow(np.reshape(test_images[i], [28, 28]), cmap=plt.cm.binary, interpolation='nearest')
    ax.text(0, 2, "pred:" + str(y[i]), color='red')
    #ax.text(0, 32, "real:" + str(test_labels[i]), color='blue')
plt.show()
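One caveat in the script above: the test images are standardised with a second StandardScaler fitted on the test set itself, whereas the usual practice is to reuse the training-set statistics. The difference is easy to see with plain numpy; the numbers below are synthetic, not MNIST:

```python
import numpy as np

train = np.array([[0.0], [2.0], [4.0]])   # mean 2, std ~1.63
test = np.array([[10.0], [12.0]])         # a very different distribution

mu, sigma = train.mean(axis=0), train.std(axis=0)
test_with_train_stats = (test - mu) / sigma                  # scaler fitted on train
test_refit = (test - test.mean(axis=0)) / test.std(axis=0)   # what the script does

print(test_with_train_stats.ravel())  # large values: test really is far from train
print(test_refit.ravel())             # [-1.  1.]: refitting hides the shift
```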
TensorflowBaseDemo-master.zip
Character recognition. Install tensorflow:
conda create -n cpu_avx2 python=3.7
The following packages will be downloaded:
package | build
---------------------------|-----------------
certifi-2020.4.5.1 | py37_0 156 KB
openssl-1.1.1g | he774522_0 4.8 MB
python-3.7.7 | h60c2a47_2 18.3 MB
setuptools-46.1.3 | py37_0 528 KB
sqlite-3.31.1 | h2a8f88b_1 1.3 MB
zlib-1.2.11 | h62dcd97_4 132 KB
------------------------------------------------------------
Total: 25.2 MB
The following NEW packages will be INSTALLED:
ca-certificates pkgs/main/win-64::ca-certificates-2020.1.1-0
certifi pkgs/main/win-64::certifi-2020.4.5.1-py37_0
openssl pkgs/main/win-64::openssl-1.1.1g-he774522_0
pip pkgs/main/win-64::pip-20.0.2-py37_1
python pkgs/main/win-64::python-3.7.7-h60c2a47_2
setuptools pkgs/main/win-64::setuptools-46.1.3-py37_0
sqlite pkgs/main/win-64::sqlite-3.31.1-h2a8f88b_1
vc pkgs/main/win-64::vc-14.1-h0510ff6_4
vs2015_runtime pkgs/main/win-64::vs2015_runtime-14.16.27012-hf0eaf9b_1
wheel pkgs/main/win-64::wheel-0.34.2-py37_0
wincertstore pkgs/main/win-64::wincertstore-0.2-py37_0
zlib pkgs/main/win-64::zlib-1.2.11-h62dcd97_4
Proceed ([y]/n)? y
Downloading and Extracting Packages
setuptools-46.1.3 | 528 KB | ############################################################################ | 100%
python-3.7.7 | 18.3 MB | ############################################################################ | 100%
certifi-2020.4.5.1 | 156 KB | ############################################################################ | 100%
zlib-1.2.11 | 132 KB | ###################
Code to convert MNIST to images; includes the MNIST data files.
The original MNIST data: handwritten-digit training data with 60,000 training samples and 10,000 test samples. Downloading the resource costs 0 points, or download it for free from the Baidu netdisk. Link: https://pan.baidu.com/s/1YfjdwjqKromieGyZTprd7Q extraction code: q9ps
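The MNIST idx files begin with a big-endian header (a magic number and an item count, plus row/column sizes for the image files), which is exactly what the struct.unpack('>IIII', ...) calls elsewhere on this page read. A minimal sketch on synthetic bytes; the header values are fabricated for the example, not read from a real file:

```python
import struct

# Build a fake idx3 image-file header: magic 2051, 60000 images of 28x28,
# packed big-endian ('>') as four unsigned 32-bit ints ('IIII').
header = struct.pack('>IIII', 2051, 60000, 28, 28)

# Reading it back mirrors the load_mnist_* helpers above.
magic, num, rows, cols = struct.unpack('>IIII', header)
print(magic, num, rows, cols)  # 2051 60000 28 28
```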
Code to build your own dataset in the style of the mnist dataset
Code for building your own dataset modelled on the mnist dataset: the dataset format used for recognition with HOG+SVM and deep learning. Works on multiple platforms; this code was tested with OpenCV 3.3 and VS2017.
hdf5-1.8 source
$ gunzip < hdf5-X.Y.Z.tar.gz | tar xf -   # extract
$ cd hdf5-X.Y.Z
$ ./configure --prefix=/usr/local/hdf5    # install prefix
$ make
$ make check # run test suite.
$ make install
$ make check-install # verify installation.
Finally, for h5 build support, install:
sudo apt-get install libhdf5-serial-dev
VSCode for Linux
VS Code, Linux version. Extensions: Clang, CMake, C/C++ IntelliSense.
ORB-SLAM2 source
3D reconstruction and tracking: the ORB-SLAM2 source code.
tensorflow dependency bundle
Python dependency libraries for tensorflow 1.4:
/home/suanfa/data/tf_depends/funcsigs-1.0.2.tar.gz
/home/suanfa/data/tf_depends/funcsigs-1.0.2-py2.py3-none-any.whl
/home/suanfa/data/tf_depends/mock-2.0.0.tar.gz
/home/suanfa/data/tf_depends/mock-2.0.0-py2.py3-none-any.whl
/home/suanfa/data/tf_depends/numpy-1.11.2.tar.gz
/home/suanfa/data/tf_depends/pbr-3.1.1.tar.gz
/home/suanfa/data/tf_depends/pbr-3.1.1-py2.py3-none-any.whl
/home/suanfa/data/tf_depends/protobuf-3.1.0.post1.tar.gz
/home/suanfa/data/tf_depends/protobuf-3.1.0.post1-py2.py3-none-any.whl
/home/suanfa/data/tf_depends/setuptools-38.2.4.rar
/home/suanfa/data/tf_depends/six-1.11.0.tar.gz
/home/suanfa/data/tf_depends/six-1.11.0-py2.py3-none-any.whl
/home/suanfa/data/tf_depends/tf_depends.rar
VOT2014 test set
The VOT2014 challenge test set, a tracking test set of 25 sequences.
Download address in the attachment. There are many tracking methods: particle filters, frequency-domain methods.
VOC image set (artificial intelligence)
VOC test and training sets; extract all three downloaded files into a single folder.
Download address in the attachment.
To learn tracking, Struck and the frequency-domain methods are good starting points.
eigen3 source
caffe provides a C++ interface, so matrix-on-matrix processing in C++ is unavoidable; the eigen library is used here for fast C++ handling of matrices, vectors, and so on.
eigen is an open-source library that needs no compilation, mainly because everything it provides is implemented as templates, so a precompiled link library cannot be used.
The Ubuntu setup is as follows:
1. Installation
This part follows the install document shipped with eigen3:
1) In the directory containing the INSTALL file, create a folder such as build_dir (mkdir build_dir)
2) Enter it (cd build_dir)
3) cmake ..
4) make
5) sudo make install
After installation the headers are in:
/usr/local/include/eigen3
pip-9.0.1-py2.py3-none-any.whl
Go to the download folder and install it with pip install pip-9.0.1-py2.py3-none-any.whl
OpenCV-2.4.2.tar.bz2
1) Packages needed before installing OpenCV:
GCC 4.4.x or later, installable with sudo apt-get install build-essential
CMake 2.6 or later
an SVN client
GTK+ 2.x or higher, including headers (libgtk2.0-dev)
pkg-config
Python 2.6 or later and NumPy 1.5 or later with developer packages (python-dev, python-numpy)
ffmpeg or libav development packages: libavcodec-dev, libavformat-dev, libswscale-dev
[optional] libdc1394 2.x
[optional] libjpeg-dev, libpng-dev, libtiff-dev, libjasper-dev
All packages can be installed from the terminal or through the Synaptic package manager.
Install the dependencies from the terminal:
sudo apt-get install build-essential libgtk2.0-dev libjpeg-dev libtiff4-dev libjasper-dev libopenexr-dev cmake python-dev python-numpy python-tk libtbb-dev libeigen2-dev yasm libfaac-dev libopencore-amrnb-dev libopencore-amrwb-dev libtheora-dev libvorbis-dev libxvidcore-dev libx264-dev libqt4-dev libqt4-opengl-dev sphinx-common texlive-latex-extra libv4l-dev libdc1394-22-dev libavcodec-dev libavformat-dev libswscale-dev
2) Download the latest OpenCV
Open http://sourceforge.net/projects/opencvlibrary
and download the package OpenCV-2.4.2.tar.bz2
sudo tar jxvf OpenCV-2.4.2.tar.bz2 -C /usr/local/
cd /usr/local/
sudo mv OpenCV-2.4.2 opencv
cd opencv
mkdir release
cd release
cmake -D WITH_TBB=ON -D BUILD_NEW_PYTHON_SUPPORT=ON -D WITH_V4L=ON -DINSTALL_C_EXAMPLES=ON -D INSTALL_PYTHON_EXAMPLES=ON -DBUILD_EXAMPLES=ON -D WITH_QT=ON -D WITH_OPENGL=ON ..
make
sudo make install
3) Post-installation configuration
Add the library path:
sudo gedit /etc/ld.so.conf.d/opencv.conf
Add the line:
/usr/local/lib
Then run in a terminal:
sudo ldconfig
Set the environment variables:
sudo gedit /etc/bash.bashrc
Append the following two lines to the end of the file and save:
PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/local/lib/pkgconfig
export PKG_CONFIG_PATH
Then restart Ubuntu or log back in to make the OpenCV installation take effect.
4) Test the bundled OpenCV samples
Build the samples:
cd /usr/local/opencv/samples/c
chmod +x build_all.sh
./build_all.sh
Run one of them:
./facedetect --cascade="/usr/local/share/OpenCV/haarcascades/haarcascade_frontalface_alt.xml" --scale=1.5 lena.jpg
The result shows a circle drawn around the face.