Setting Up the Environment for Stanford's Open-Source Video Retrieval Project on GitHub (videosearch)

Project link: https://github.com/andrefaraujo/videosearch


This post records my process of setting up the environment for this project, along with the solutions to the problems I ran into, as a reference for later study.

The original README, translated, is as follows:

This project currently contains code for:

  • Keyframe extraction from videos
  • Shot boundary detection for videos
  • SIFT descriptor extraction per image/frame
  • Global descriptor (Fisher vector) extraction per image/frame, shot or scene
  • Bloom filter indexing per scene (video)
  • Retrieval from an image or video database using image queries
  • Evaluation of retrieval results based on average precision and precision@1

With these, you can reproduce the main results of the papers mentioned below, following the steps outlined in the next section.

This repository can also be useful if you are interested in searching a database of images using image queries. In that case, the frame-based techniques described below can be used directly.

The implementation has been tested on Linux (Ubuntu) and Mac OS X.

For any issues or questions, feel free to get in touch.

 

Quick start: the required steps

This section illustrates the use of the code in this repository through a simple example that runs on a database of 4 video clips and 2 query images. It also serves as a way to make sure your build is working correctly.

Environment:

If you are working from a freshly installed Linux system, you will first need to install the basics: a g++ toolchain, Python, pip, and so on (a sketch of the corresponding apt commands is given after the list below).

 

1. A Linux system (I used Ubuntu 18).

2. Install OpenCV on the system (I installed OpenCV 4).

I recommend this installation tutorial: "Installing OpenCV on Ubuntu".

If the installation goes wrong and searching around doesn't help, I recommend going to the official site and following its installation guide:

"Installing OpenCV on Ubuntu" (the official OpenCV installation guide)

3. Install ffmpeg on the system (I installed it directly with the package manager rather than building from source).

ffmpeg installation tutorial

4. Install pkg-config (this may already be installed; my machine had it out of the box).
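For reference, here is a minimal sketch of installing these prerequisites with apt; the package names are assumptions based on the standard Ubuntu 18.04 repositories. Note that the libopencv-dev package in Ubuntu 18.04 provides OpenCV 3.x, so if you specifically want OpenCV 4 (as I used), you will still need to build it from source following the tutorials above.

sudo apt-get update
# Basic toolchain and Python environment
sudo apt-get install -y build-essential g++ python python-pip
# ffmpeg and pkg-config
sudo apt-get install -y ffmpeg pkg-config
# Optional: OpenCV from the Ubuntu repositories (OpenCV 3.x on 18.04, not OpenCV 4)
sudo apt-get install -y libopencv-dev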

 

Now run the commands:

Step 1: Clone repository (where mypath is the path you'll download the repository to): 

Here "mypath" is the directory you download the repository into; for example, I put it on my desktop, so the repository ends up at "/home/<your username>/Desktop/videosearch".

cd $mypath
git clone https://github.com/andrefaraujo/videosearch.git

Cloning with git was very slow for me; I recommend downloading the repository directly from the web page and copying it into place instead. A sketch of both options follows.
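As a concrete example of how $mypath is used in the commands below (the desktop location is just my own choice, and the archive URL assumes the repository's default branch is named master):

# Example: use the desktop as the download location (assumed path)
export mypath="$HOME/Desktop"
cd "$mypath"

# Option 1: clone with git
git clone https://github.com/andrefaraujo/videosearch.git

# Option 2: download the archive from the GitHub page instead
# (assumes the default branch is named master)
wget https://github.com/andrefaraujo/videosearch/archive/master.zip
unzip master.zip
mv videosearch-master videosearch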

 

Step 2: Building VLFEAT library:


cd $mypath/videosearch/common/vlfeat-0.9.18/
make
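If the build succeeds, the VLFEAT shared library should appear under the source tree (on 64-bit Linux this is typically bin/glnxa64/libvl.so; the exact subdirectory is platform-dependent). A quick sanity check:

# Sanity check: locate the built VLFEAT shared library (libvl.dylib on Mac OS X)
find . -name "libvl.*"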

Step 3: Building YAEL library:


cd $mypath/videosearch/common/yael_v260_modif/
./configure.sh
cd yael
make
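Similarly, a quick sanity check that the YAEL build produced its shared library (assuming it is named libyael; adjust the pattern if your build names it differently):

# Sanity check: locate the built YAEL shared library
find . -name "libyael*"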

Step 4: Extract keyframes from test database videos:

cd $mypath/videosearch/indexer/keyframes
./run_keyframe_extraction_test.sh
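Keyframe extraction is the part of the pipeline that relies on the ffmpeg installed in the environment step. The exact command used by run_keyframe_extraction_test.sh may differ, but a hand-rolled equivalent for a single clip would look roughly like this (the file names and the one-frame-per-second rate are assumptions for illustration only):

# Illustration only: sample one frame per second from a clip as JPEG keyframes
mkdir -p keyframes_out
ffmpeg -i some_clip.mp4 -vf fps=1 keyframes_out/keyframe_%05d.jpg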

Step 5: Build shot boundary detector and extract shot boundaries for test database videos:

cd $mypath/videosearch/indexer/shot_detector
make
./run_shot_detector_test.sh

Step 6: Build SIFT extractor and extract SIFT for each keyframe in database:

cd $mypath/videosearch/indexer/local_descriptors/
make
./run_sift_extraction_test.sh

Step 7: Build global descriptor extractors and extract global descriptors per frame, shot and scene:

cd $mypath/videosearch/indexer/global_descriptors/
make
    
# Extract frame-based global descriptors (GD)
./run_frame_based_index_test.sh # extract GDs for each clip
./run_join_frame_based_index_test.sh # join all GDs in one index
    
# Extract shot-based global descriptors (GD) with mode LOC
./run_shot_based_index_mode_1_test.sh # extract GDs for each clip
./run_join_shot_based_index_mode_1_test.sh # join all GDs in one index
./run_process_shot_files_mode_1_test.sh # process auxiliary shot files for this mode

# Extract shot-based global descriptors (GD) with mode INDEP
./run_shot_based_index_mode_0_test.sh # extract GDs for each clip
./run_join_shot_based_index_mode_0_test.sh # join all GDs in one index
./run_process_shot_files_mode_0_test.sh # process auxiliary shot files for this mode
    
# Extract scene-based global descriptors (GD)
./run_scene_based_index_test.sh # extract GD for each clip
./run_join_scene_based_index_test.sh # join all GDs in one index
./run_process_scene_files_test.sh # process auxiliary scene files
./run_process_scene_rerank_files_test.sh # process auxiliary file for scene reranking

Step 8: Extract local descriptors (and optionally global descriptors) for query images (you need to do this before running retriever, which is the next step):

cd $mypath/videosearch/indexer/local_descriptors/
./run_sift_extraction_test_query.sh
# Optional: extract global descriptors
cd $mypath/videosearch/indexer/global_descriptors/
./run_query_index_test.sh

Step 9: Build retriever and run it for frame-, shot- and scene-based indexes:

cd $mypath/videosearch/retriever/
make

# Retrieve using frame-based global descriptors
./run_frame_test.sh

# Optional: Retrieve using frame-based global descriptors, using pre-computed query global descriptors
./run_frame_test_with_query_index.sh

# Retrieve using shot-based global descriptors, mode LOC
./run_shot_mode_1_test.sh

# Retrieve using shot-based global descriptors, mode INDEP
./run_shot_mode_0_test.sh

# Retrieve using scene-based global descriptors in first stage,
# then shot-based global descriptors in second stage
./run_scene_test.sh

Step 10: Evaluate retrieval results (calculate AP and p@1):

cd $mypath/videosearch/scoring/

# Evaluate frame-based results
./run_convert_frame_based_results_test.sh # converting results to scoreable format
./run_evaluate_frame_based_test.sh # calculating AP and p@1

# Optional: Evaluate frame-based results which used pre-computed query global descriptors
./run_convert_frame_based_results_test_query_index.sh # converting results to scoreable format
./run_evaluate_frame_based_test_query_index.sh # calculating AP and p@1

# Evaluate shot-based results, mode LOC
./run_convert_shot_based_mode_1_results_test.sh # converting results to scoreable format
./run_evaluate_shot_based_mode_1_test.sh # calculating AP and p@1

# Evaluate shot-based results, mode INDEP
./run_convert_shot_based_mode_0_results_test.sh # converting results to scoreable format
./run_evaluate_shot_based_mode_0_test.sh # calculating AP and p@1

# Evaluate scene-based results
./run_convert_scene_based_results_test.sh # converting results to scoreable format
./run_evaluate_scene_based_test.sh # calculating AP and p@1
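For reference, the two metrics computed here are standard retrieval measures (these are the textbook definitions, not taken from the repository's scoring code): average precision (AP) averages precision over the ranks at which relevant results are retrieved, and precision@1 (p@1) is 1 if the top-ranked result is relevant and 0 otherwise. For a ranked list of N results containing R relevant items:

    AP = (1/R) * sum_{k=1..N} P(k) * rel(k)
    p@1 = rel(1)

where P(k) is the precision among the top k results and rel(k) is 1 if the result at rank k is relevant and 0 otherwise. Averaging AP over all queries gives mean average precision (mAP).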

 


I have not set up the later parts yet...
