OpenCV: Measuring Execution Time with getTickCount() and getTickFrequency()

Timing functions explained
This post covers two key functions the computer-vision library provides for measuring how long code takes to run: getTickCount() and getTickFrequency(). getTickCount() returns the number of ticks elapsed since the operating system started, and getTickFrequency() returns the tick frequency, i.e. the number of ticks per second. Together they let you measure the execution time of a code fragment accurately.

This is really a very simple trick; I'm posting it because I find myself using these two functions all the time, so I might as well write it up.

```cpp
double t1 = (double)getTickCount();
// ... code to be timed ...
double t2 = (double)getTickCount();
cout << "time: " << (t2 - t1) * 1000 / getTickFrequency() << endl;  // milliseconds
```

Both getTickCount() and getTickFrequency() are declared in core.hpp:

```cpp
//! Returns the number of ticks.

/*!
  The function returns the number of ticks since the certain event (e.g. when the machine was turned on).
  It can be used to initialize cv::RNG or to measure a function execution time by reading the tick count
  before and after the function call. The granularity of ticks depends on the hardware and OS used. Use
  cv::getTickFrequency() to convert ticks to seconds.
*/
CV_EXPORTS_W int64 getTickCount();

/*!
  Returns the number of ticks per seconds.

  The function returns the number of ticks (as returned by cv::getTickCount()) per second.
  The following code computes the execution time in milliseconds:

  \code
  double exec_time = (double)getTickCount();
  // do something ...
  exec_time = ((double)getTickCount() - exec_time)*1000./getTickFrequency();
  \endcode
*/
CV_EXPORTS_W double getTickFrequency();

/*!
  Returns the number of CPU ticks.

  On platforms where the feature is available, the function returns the number of CPU ticks
  since the certain event (normally, the system power-on moment). Using this function
  one can accurately measure the execution time of very small code fragments,
  for which cv::getTickCount() granularity is not enough.
*/
CV_EXPORTS_W int64 getCPUTickCount();
```
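As a quick aside on the last declaration above, here is a minimal sketch of using getCPUTickCount() on a very small fragment (the loop is just a placeholder workload, and `<opencv2/core.hpp>` plus `<iostream>` are assumed to be included). Since the header exposes no matching "CPU tick frequency" helper, the raw tick count is typically used for relative comparisons rather than converted to wall-clock time.

```cpp
int64_t c1 = cv::getCPUTickCount();
int sum = 0;
for (int i = 0; i < 1000; ++i)   // tiny placeholder workload
    sum += i;
int64_t c2 = cv::getCPUTickCount();
std::cout << "CPU ticks spent: " << (c2 - c1) << " (sum = " << sum << ")" << std::endl;
```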

getTickCount(): returns the number of ticks elapsed from system start-up to the current moment; the name says it all: get Tick Count(s).
getTickFrequency(): returns the tick frequency, get Tick Frequency; the unit is per second, i.e. how many ticks occur in one second.

So the rest is straightforward:
total ticks / ticks per second = time (s)
1000 * total ticks / ticks per second = time (ms)
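Put into code, the two conversions look like this (a minimal sketch; the timed region is a placeholder):

```cpp
double start = (double)cv::getTickCount();
// ... code to be timed ...
double stop = (double)cv::getTickCount();
double seconds = (stop - start) / cv::getTickFrequency();           // ticks / ticks-per-second
double millis  = (stop - start) * 1000.0 / cv::getTickFrequency();  // 1000 * ticks / ticks-per-second
```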

The logic is clear and there is nothing wrong with it, but there is one small pitfall: the C-style cvGetTickFrequency() and the C++ getTickFrequency() use different units. The former reports the frequency in ticks per microsecond, while the latter reports ticks per second. So if you time code with the C-style cvGetTickFrequency(), the conversions become:
total ticks / (ticks per microsecond * 1000) = time (ms)
total ticks / (ticks per microsecond * 1000000) = time (s)
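For reference, a minimal sketch of the C-style version (assuming the legacy C API is available, e.g. via opencv2/core/core_c.h in OpenCV 2.x):

```cpp
double t1 = (double)cvGetTickCount();
// ... code to be timed ...
double t2 = (double)cvGetTickCount();
// cvGetTickFrequency() reports ticks per *microsecond*, hence the extra factors
double ms  = (t2 - t1) / (cvGetTickFrequency() * 1000.0);
double sec = (t2 - t1) / (cvGetTickFrequency() * 1000000.0);
```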

### How to optimize OpenCV performance for faster processing

#### Multi-threaded processing
To speed up image processing significantly, multi-threading can be used to parallelize compute-intensive operations. Distributing the work sensibly across several CPU cores can noticeably reduce the overall running time.

```python
import threading
import cv2 as cv

def process_image(image_path):
    img = cv.imread(image_path)
    result = cv.medianBlur(img, 5)  # example processing function
    return result

image_paths = ['img0.jpg', 'img1.jpg', 'img2.jpg']  # placeholder list of images to process
threads = []
for path in image_paths:
    thread = threading.Thread(target=process_image, args=(path,))
    threads.append(thread)
    thread.start()

for thread in threads:
    thread.join()
```

#### Use the built-in optimization options
OpenCV ships with a number of built-in optimizations that may not all be active by default. Making sure SIMD instruction-set support and other compiler-level acceleration features are enabled can speed up execution further; a small sketch follows at the end of this section.

#### Avoid unnecessary memory copies
Frequent data copying wastes a lot of resources, so the program logic should be designed to avoid it wherever possible. For example, when reading files or passing arguments to function calls, consider passing pointers or references instead of full object copies.

#### Measure and locate the bottlenecks
Before trying any specific speed-up technique, it is well worth profiling the existing pipeline with `cv.getTickCount()` and `time.time()`. This helps identify the most time-consuming parts so the optimization effort can be focused where it matters.

```python
import cv2 as cv
import time

img = cv.imread('path_to_image')

start_tick = cv.getTickCount()
start_time = time.time()

# Perform operations on the image here...

end_tick = cv.getTickCount()
end_time = time.time()

tick_duration = (end_tick - start_tick) / cv.getTickFrequency()  # seconds
real_duration = end_time - start_time                            # seconds

print(f'Tick-based duration: {tick_duration}')
print(f'Real-time duration: {real_duration}')
```
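As mentioned under the built-in optimization options above, here is a minimal sketch of checking and toggling them from C++ (the equivalent Python calls are cv.useOptimized() and cv.setUseOptimized()):

```cpp
#include <iostream>
#include <opencv2/core.hpp>

int main() {
    // Report whether the SIMD-dispatched optimized code paths are active
    std::cout << "Optimizations enabled: " << std::boolalpha << cv::useOptimized() << std::endl;

    // They are normally on by default; the setter is shown here for illustration
    cv::setUseOptimized(true);
    std::cout << "After setUseOptimized(true): " << cv::useOptimized() << std::endl;
    return 0;
}
```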