I have a list of floats (actually it's a pandas Series, if that makes a difference) which looks like this:
mySeries:
...
22 16.0
23 14.0
24 12.0
25 10.0
26 3.1
...
(So the elements of this Series are on the right, the indices on the left.) I'm trying to use the elements of this Series as keys in a dictionary and the indices as values, like this:
{ mySeries[i]: i for i in mySeries.index }
and I'm getting pretty much what I wanted, except that...
{ 6400.0: 0, 66.0: 13, 3.1000000000000001: 23, 133.0: 10, ... }
Why has 3.1 suddenly changed into 3.1000000000000001? I guess this has something to do with how floating-point numbers are represented, but why does it happen here, and how do I avoid or fix it?
EDIT: Please feel free to suggest a better title for this question if it's inaccurate.
EDIT2: OK, so it seems that it's the exact same number, just printed differently. Still, if I use mySeries[26] as a dictionary key and then try to run:
myDict[mySeries[26]]
I get KeyError. What's the best way to avoid it?
Solution
The dictionary isn't changing the floating-point value of 3.1; it is simply displaying it at full precision. The pandas display of mySeries[26] rounds the value and shows you an approximation.
You can prove this:
import pandas as pd
pd.set_option('display.precision', 20)
Then view the Series (rebuilt here with a default 0-based index):
0 16.00000000000000000000
1 14.00000000000000000000
2 12.00000000000000000000
3 10.00000000000000000000
4 3.10000000000000008882
dtype: float64
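For a pandas-free check, here is a minimal sketch (assuming Python 3 on standard CPython with IEEE-754 doubles; this example is mine, not part of the original answer). Decimal(float) converts exactly, so it exposes the true binary value stored for 3.1:

from decimal import Decimal

# The exact value of the double nearest to 3.1:
print(Decimal(3.1))
# 3.100000000000000088817841970012523233890533447265625

# Both literals round to that same double, so they compare equal and hash
# identically -- as dictionary keys they are one and the same:
print(3.1 == 3.1000000000000001)              # True
print(hash(3.1) == hash(3.1000000000000001))  # True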
EDIT:
Regarding the KeyError, I was not able to reproduce the problem (note this is a Python 2 session; in Python 3, repr(3.1) prints just 3.1 and dict.keys() returns a non-indexable view):
>>> x = pd.Series([16, 14, 12, 10, 3.1])
>>> a = {x[i]: i for i in x.index}
>>> a[x[4]]
4
>>> a.keys()
[16.0, 10.0, 3.1000000000000001, 12.0, 14.0]
>>> hash(x[4])
2093862195
>>> hash(a.keys()[2])
2093862195
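If you do hit the KeyError, the usual cause (a hypothetical illustration, assuming Python 3; it is not reproduced from your data) is that the lookup value is computed rather than taken verbatim from the Series, so it can land on a double one bit away from the stored key:

d = {0.3: 'found'}
print(d[0.3])            # 'found': the identical double, so the lookup succeeds
print(0.1 + 0.2)         # 0.30000000000000004, a *different* double than 0.3
print(d.get(0.1 + 0.2))  # None, so d[0.1 + 0.2] would raise KeyError

A common workaround is to avoid raw floats as keys: round both the keys and the lookups to a fixed number of digits, e.g. { round(mySeries[i], 9): i for i in mySeries.index }, or keep the integer indices as keys and the floats as values.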