Can someone please explain why, in Java,
if (0.6 <= 0.6f) System.out.printf("true");
else System.out.printf("false");
prints true,
but
if (0.7 <= 0.7f) System.out.printf("true");
else System.out.printf("false");
prints false?
Is this related to the IEEE 754 standard, where the floating-point number is converted to double for the comparison? Can someone explain in detail exactly how this works?
Solution
Sure - it's just a matter of understanding that none of 0.6, 0.6f, 0.7 and 0.7f are those exact values: each is the closest representable approximation in the appropriate type. (When a double is compared with a float, the float operand is first widened to double, which doesn't change its exact value, so only the original approximations matter.) The exact values stored for those four literals are:
0.6f => 0.60000002384185791015625
0.6 => 0.59999999999999997779553950749686919152736663818359375
0.7f => 0.699999988079071044921875
0.7 => 0.6999999999999999555910790149937383830547332763671875
With that information, it's clear why you're getting the results you are.
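You can verify those exact values yourself: the `BigDecimal(double)` constructor preserves the exact binary value of its argument rather than rounding it, so printing it shows what's really stored. A minimal sketch (the class name is just illustrative):

```java
import java.math.BigDecimal;

public class FloatVsDouble {
    public static void main(String[] args) {
        // new BigDecimal(double) keeps the exact binary value,
        // revealing what the literal really stores. The float
        // literals are widened to double without changing value.
        System.out.println("0.6f => " + new BigDecimal(0.6f));
        System.out.println("0.6  => " + new BigDecimal(0.6));
        System.out.println("0.7f => " + new BigDecimal(0.7f));
        System.out.println("0.7  => " + new BigDecimal(0.7));

        // In a mixed comparison, the float operand is widened to
        // double first, then the two exact values are compared.
        System.out.println(0.6 <= 0.6f); // true:  0.5999... <= 0.60000002...
        System.out.println(0.7 <= 0.7f); // false: 0.6999...9555 > 0.699999988...
    }
}
```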
To think of it another way, imagine you had two decimal floating point types, one with 4 digits of precision and one with 8 digits of precision. Now let's look at how 1/3 and 2/3 would be represented:
1/3, 4dp => 0.3333
1/3, 8dp => 0.33333333
2/3, 4dp => 0.6667
2/3, 8dp => 0.66666667
So in this case the lower-precision value is smaller than the higher-precision one for 1/3, but that's reversed for 2/3. It's the same sort of thing for float and double, just in binary.
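The decimal analogy can be played out in code using `BigDecimal` with two different `MathContext` precisions standing in for the 4-digit and 8-digit types (a sketch; the class name and the choice of rounding mode are just illustrative):

```java
import java.math.BigDecimal;
import java.math.MathContext;

public class PrecisionAnalogy {
    public static void main(String[] args) {
        MathContext dp4 = new MathContext(4); // 4 significant digits
        MathContext dp8 = new MathContext(8); // 8 significant digits

        BigDecimal one = BigDecimal.ONE;
        BigDecimal two = BigDecimal.valueOf(2);
        BigDecimal three = BigDecimal.valueOf(3);

        BigDecimal third4 = one.divide(three, dp4);     // 0.3333
        BigDecimal third8 = one.divide(three, dp8);     // 0.33333333
        BigDecimal twoThirds4 = two.divide(three, dp4); // 0.6667
        BigDecimal twoThirds8 = two.divide(three, dp8); // 0.66666667

        // 1/3 rounds down, so the low-precision value is smaller...
        System.out.println(third4.compareTo(third8) < 0);      // true
        // ...but 2/3 rounds up, so the low-precision value is larger.
        System.out.println(twoThirds4.compareTo(twoThirds8) > 0); // true
    }
}
```

Whether the lower-precision approximation lands above or below the higher-precision one depends entirely on which way the last digit rounds, which is exactly why 0.6f and 0.7f sit on opposite sides of their double counterparts.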