Why FLT_EPSILON is 1.192092896e-07F: the smallest positive value x such that 1.0 + x != 1.0

Floating-Point Precision and Error Handling
This article discusses the limitations of floating-point representation in computers and the computation errors they cause. It explains how FLT_EPSILON and DBL_EPSILON can be used to test floating-point values for equality, and includes a sample program demonstrating their use.

References:

http://bbs.csdn.net/topics/320026823

https://msdn.microsoft.com/en-us/library/c151dt3s(VS.71).aspx

Plus my previous post explaining how floating-point numbers are represented:

http://blog.csdn.net/l773575310/article/details/52788866

(epsilon: the fifth letter of the Greek alphabet)



Sometimes evaluating (1.0 == 10.0 / 10.0) does not return true, because floating-point computation can introduce a small error that cannot be avoided. The correct approach is to check whether the two values differ by no more than this error, i.e., whether the difference lies between -FLT_EPSILON and +FLT_EPSILON: (1.0 > (10.0 / 10.0) - FLT_EPSILON && 1.0 < (10.0 / 10.0) + FLT_EPSILON).
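For instance, a small helper along these lines captures that range check; this is only a sketch (the name float_eq and the use of fabsf are my own choices, not taken from the linked posts):

#include <float.h>
#include <math.h>
#include <stdio.h>

/* Treat two floats as equal when they differ by no more than FLT_EPSILON.
   This fixed tolerance is only appropriate for values with magnitude near 1. */
static int float_eq(float a, float b)
{
    return fabsf(a - b) <= FLT_EPSILON;
}

int main(void)
{
    printf("%d\n", float_eq(1.0f, 10.0f / 10.0f));  /* prints 1 */
    return 0;
}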


FLT_EPSILON is defined as a macro in the float.h header as follows:


#define FLT_EPSILON  1.192092896e-07F // smallest such that 1.0+FLT_EPSILON != 1.0


The comment can be read as: FLT_EPSILON is the smallest number x for which (1.0 + x != 1.0) holds.

In other words, a number much smaller than FLT_EPSILON is simply absorbed when added to 1.0, and the result still compares equal to 1.0.
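A tiny experiment makes this concrete (a sketch using only standard headers):

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("%d\n", 1.0f + FLT_EPSILON != 1.0f);      /* prints 1: the sum is distinguishable from 1.0 */
    printf("%d\n", 1.0f + FLT_EPSILON / 4 == 1.0f);  /* prints 1: a much smaller addend is absorbed */
    return 0;
}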


The float value 1.0 is represented as:

Sign bit: 0 (1 bit); exponent field: 2^7 - 1 + 0 = 127 (8 bits); significand: 1000 0000 0000 0000 0000 0000 (the leading 1 is implicit and not stored, so 23 bits are kept; see the previous post).

0 0111 1111 000 0000 0000 0000 0000 0000

The smallest float greater than 1.0 is therefore 0x3F80 0001.

That trailing 1 in the last mantissa bit is worth 2^-23 = 0.00000011920928955078125 (easy to verify with the Windows 10 built-in calculator, which shows far more digits than the macro), and rounding that value gives exactly the macro value 1.192092896e-07F.
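This is easy to verify on a machine by inspecting the bit patterns directly; the following sketch uses memcpy to reinterpret the bits and nextafterf to obtain the next float above 1.0 (both standard C99):

#include <stdio.h>
#include <float.h>
#include <math.h>
#include <stdint.h>
#include <string.h>

static uint32_t bits_of(float f)
{
    uint32_t u;
    memcpy(&u, &f, sizeof u);   /* reinterpret the float's bit pattern */
    return u;
}

int main(void)
{
    float one  = 1.0f;
    float next = nextafterf(1.0f, 2.0f);      /* smallest float greater than 1.0 */
    printf("1.0         = 0x%08X\n", bits_of(one));   /* 0x3F800000 */
    printf("next up     = 0x%08X\n", bits_of(next));  /* 0x3F800001 */
    printf("difference  = %.17g\n", (double)(next - one)); /* 2^-23, i.e. FLT_EPSILON */
    printf("FLT_EPSILON = %.17g\n", (double)FLT_EPSILON);
    return 0;
}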

My own understanding: the macro FLT_EPSILON exists because the computer cannot store most decimal fractions exactly as binary floating-point, so a tolerance is needed when two computed values are compared.

For example, consider the question from the first link above:

#include <stdio.h>
#include <float.h>
#include <math.h>

/* FLOAT_EQ is not defined in the original snippet; this tolerance-based
   definition is one plausible choice. */
#define FLOAT_EQ(x, y) (fabs((x) - (y)) <= FLT_EPSILON)

int main(void)
{
   float a, b, c;
   a = 1.345f;
   b = 1.123f;
   c = a + b;
   // if (FLOAT_EQ(c, 2.468))   // Remove comment for correct result
   if (c == 2.468)              // Comment this line for correct result
      printf("They are equal.\n");
   else
      printf("They are not equal! The value of c is %13.10f,or %f", c, c);
   return 0;
}
The output is:

They are not equal! The value of c is 2.4679999352 or 2.468000.

For EPSILON, you can use the constants FLT_EPSILON, which is defined for float as 1.192092896e-07F, or DBL_EPSILON, which is defined for double as 2.2204460492503131e-016. You need to include float.h for these constants. These constants are defined as the smallest positive number x, such that x+1.0 is not equal to 1.0. Because this is a very small number, you should employ user-defined tolerance for calculations involving very large numbers.


So a tolerance (namely FLT_EPSILON) is needed to absorb the deviation caused by limited precision.
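As the MSDN excerpt notes, FLT_EPSILON itself is only a sensible tolerance for values near 1.0; for larger magnitudes the tolerance should be scaled. A minimal sketch of such a relative comparison (the helper name almost_equal is my own):

#include <float.h>
#include <math.h>

/* Relative comparison: scale the tolerance by the larger operand's magnitude. */
static int almost_equal(float a, float b)
{
    float scale = fmaxf(fabsf(a), fabsf(b));
    return fabsf(a - b) <= scale * FLT_EPSILON;
}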


P.S. There is an epsilon for double precision as well:

#define DBL_EPSILON      2.2204460492503131e-016 // smallest such that 1.0+DBL_EPSILON != 1.0
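For completeness, printing both constants shows their documented values (a sketch; the exact digits printed may vary slightly between toolchains):

#include <stdio.h>
#include <float.h>

int main(void)
{
    printf("FLT_EPSILON = %.9e\n",  FLT_EPSILON);   /* about 1.192092896e-07 */
    printf("DBL_EPSILON = %.16e\n", DBL_EPSILON);   /* about 2.2204460492503131e-16 */
    return 0;
}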
