# Python Pandas: Tricks & Features You May Not Know


Pandas is a foundational library for analytics, data processing, and data science. It’s a huge project with tons of optionality and depth.

This tutorial will cover some lesser-used but idiomatic Pandas capabilities that lend your code better readability, versatility, and speed, à la the Buzzfeed listicle.

If you feel comfortable with the core concepts of Python’s Pandas library, hopefully you’ll find a trick or two in this article that you haven’t stumbled across previously. (If you’re just starting out with the library, 10 Minutes to Pandas is a good place to start.)

Note: The examples in this article are tested with Pandas version 0.23.2 and Python 3.6.6. However, they should also be valid in older versions.

## 1. Configure Options & Settings at Interpreter Startup

You may have run across Pandas’ rich options and settings system before.

It’s a huge productivity saver to set customized Pandas options at interpreter startup, especially if you work in a scripting environment. You can use pd.set_option() to configure to your heart’s content with a Python or IPython startup file.

The options use a dot notation such as pd.set_option('display.max_colwidth', 25), which lends itself well to a nested dictionary of options:

```python
import pandas as pd


def start():
    options = {
        'display': {
            'max_columns': None,
            'max_colwidth': 25,
            'expand_frame_repr': False,  # Don't wrap to multiple pages
            'max_rows': 14,
            'max_seq_items': 50,         # Max length of printed sequence
            'precision': 4,
            'show_dimensions': False
        },
        'mode': {
            'chained_assignment': None   # Controls SettingWithCopyWarning
        }
    }

    for category, option in options.items():
        for op, value in option.items():
            pd.set_option(f'{category}.{op}', value)  # Python 3.6+


if __name__ == '__main__':
    start()
    del start  # Clean up namespace in the interpreter
```


If you launch an interpreter session, you’ll see that everything in the startup script has been executed, and Pandas is imported for you automatically with your suite of options:
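A quick check, reading one option back with pd.get_option():

```python
>>> pd.__name__
'pandas'

>>> pd.get_option('display.max_rows')
14
```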

Let’s use some data on abalone hosted by the UCI Machine Learning Repository to demonstrate the formatting that was set in the startup file. The data will truncate at 14 rows with 4 digits of precision for floats:

```python
>>> url = ('https://archive.ics.uci.edu/ml/'
...        'machine-learning-databases/abalone/abalone.data')
>>> cols = ['sex', 'length', 'diam', 'height', 'weight', 'rings']
>>> abalone = pd.read_csv(url, usecols=[0, 1, 2, 3, 4, 8], names=cols)

>>> abalone
     sex  length   diam  height  weight  rings
0      M   0.455  0.365   0.095  0.5140     15
1      M   0.350  0.265   0.090  0.2255      7
2      F   0.530  0.420   0.135  0.6770      9
3      M   0.440  0.365   0.125  0.5160     10
4      I   0.330  0.255   0.080  0.2050      7
5      I   0.425  0.300   0.095  0.3515      8
6      F   0.530  0.415   0.150  0.7775     20
...   ..     ...    ...     ...     ...    ...
4170   M   0.550  0.430   0.130  0.8395     10
4171   M   0.560  0.430   0.155  0.8675      8
4172   F   0.565  0.450   0.165  0.8870     11
4173   M   0.590  0.440   0.135  0.9660     10
4174   M   0.600  0.475   0.205  1.1760      9
4175   F   0.625  0.485   0.150  1.0945     10
4176   M   0.710  0.555   0.195  1.9485     12
```


You’ll see this dataset pop up in other examples later as well.

## 2. Make Toy Data Structures With Pandas’ Testing Module

Hidden way down in Pandas’ testing module are a number of convenient functions for quickly building quasi-realistic Series and DataFrames:
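For instance, a quick sketch (in Pandas 0.23 the module lives at pandas.util.testing; the generated values are random):

```python
import pandas.util.testing as tm

tm.N, tm.K = 15, 3  # Module-level defaults: rows and columns for the make* functions

df1 = tm.makeTimeDataFrame(freq='M')  # 15x3 DataFrame indexed by month-end dates
df2 = tm.makeDataFrame()              # 15x3 DataFrame with a random string index
```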

There are around 30 of these, and you can see the full list by calling dir() on the module object. Here are a few:

```python
>>> [i for i in dir(tm) if i.startswith('make')]
['makeBoolIndex',
 'makeCategoricalIndex',
 'makeCustomDataframe',
 'makeCustomIndex',
 # ...,
 'makeTimeSeries',
 'makeTimedeltaIndex',
 'makeUIntIndex',
 'makeUnicodeIndex']
```


These can be useful for benchmarking, testing assertions, and experimenting with Pandas methods that you are less familiar with.

## 3. Take Advantage of Accessor Methods

Perhaps you’ve heard of the term accessor, which is somewhat like a getter (although getters and setters are used infrequently in Python). For our purposes here, you can think of a Pandas accessor as a property that serves as an interface to additional methods.

Pandas Series have three of them:
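You can peek at them via the internal `_accessors` attribute on the Series class (an implementation detail, so its contents may vary by version):

```python
>>> pd.Series._accessors
{'cat', 'str', 'dt'}
```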

Yes, that definition above is a mouthful, so let’s take a look at a few examples before discussing the internals.

.cat is for categorical data, .str is for string (object) data, and .dt is for datetime-like data. Let’s start off with .str: imagine that you have some raw city/state/ZIP data as a single field within a Pandas Series.


Pandas string methods are vectorized, meaning that they operate on the entire array without an explicit for-loop:


```python
>>> addr = pd.Series([
...     'Washington, D.C. 20003',
...     'Brooklyn, NY 11211-1755',
...     'Omaha, NE 68154',
...     'Pittsburgh, PA 15211'
... ])

>>> addr.str.upper()
0     WASHINGTON, D.C. 20003
1    BROOKLYN, NY 11211-1755
2            OMAHA, NE 68154
3       PITTSBURGH, PA 15211
dtype: object

>>> addr.str.count(r'\d')  # 5 or 9-digit zip?
0    5
1    9
2    5
3    5
dtype: int64
```


For a more involved example, let’s say that you want to separate out the three city/state/ZIP components neatly into DataFrame fields.

You can pass a regular expression to .str.extract() to “extract” parts of each cell in the Series. In .str.extract(), .str is the accessor, and .str.extract() is an accessor method:
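Here's a sketch of how that can look; the regex and its named groups are illustrative:

```python
>>> regex = (r'(?P<city>[A-Za-z ]+), '      # One or more letters, then comma+space
...          r'(?P<state>[A-Z]{2}) '        # Two capital letters
...          r'(?P<zip>\d{5}(?:-\d{4})?)')  # 5 digits, optional 4-digit extension

>>> addr.str.replace('.', '').str.extract(regex)
         city state         zip
0  Washington    DC       20003
1    Brooklyn    NY  11211-1755
2       Omaha    NE       68154
3  Pittsburgh    PA       15211
```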

This also illustrates what is known as method-chaining, where .str.extract(regex) is called on the result of addr.str.replace('.', ''), which cleans up use of periods to get a nice 2-character state abbreviation.

It’s helpful to know a tiny bit about how these accessor methods work as a motivating reason for why you should use them in the first place, rather than something like addr.apply(re.findall, ...).

Each accessor is itself a bona fide Python class:
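For example (the exact class names and module paths vary across Pandas versions):

```python
>>> pd.Series.str
<class 'pandas.core.strings.StringMethods'>

>>> pd.Series.dt
<class 'pandas.core.indexes.accessors.CombinedDatetimelikeProperties'>
```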

These standalone classes are then “attached” to the Series class using a CachedAccessor. It is when the classes are wrapped in CachedAccessor that a bit of magic happens.

CachedAccessor is inspired by a “cached property” design: a property is only computed once per instance and then replaced by an ordinary attribute. It does this by overloading the .__get__() method, which is part of Python’s descriptor protocol.
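Here is a minimal sketch of that descriptor pattern, simplified rather than Pandas' exact source:

```python
class CachedAccessor:
    """Descriptor that builds an accessor object once per instance, then caches it."""

    def __init__(self, name, accessor):
        self._name = name          # Attribute name, e.g. 'str'
        self._accessor = accessor  # Accessor class, e.g. StringMethods

    def __get__(self, obj, cls):
        if obj is None:
            return self._accessor                # Accessed on the class itself
        accessor_obj = self._accessor(obj)       # Compute once per instance...
        setattr(obj, self._name, accessor_obj)   # ...and cache as a plain attribute
        return accessor_obj
```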


Note: If you’d like to read more about the internals of how this works, see the Python Descriptor HOWTO and this post on the cached property design. Python 3 also introduced functools.lru_cache(), which offers similar functionality.


The second accessor, .dt, is for datetime-like data. It technically belongs to Pandas’ DatetimeIndex, and if called on a Series, it is converted to a DatetimeIndex first:

```python
>>> daterng = pd.Series(pd.date_range('2017', periods=9, freq='Q'))
>>> daterng
0   2017-03-31
1   2017-06-30
2   2017-09-30
3   2017-12-31
4   2018-03-31
5   2018-06-30
6   2018-09-30
7   2018-12-31
8   2019-03-31
dtype: datetime64[ns]

>>> daterng.dt.day_name()
0      Friday
1      Friday
2    Saturday
3      Sunday
4    Saturday
5    Saturday
6      Sunday
7      Monday
8      Sunday
dtype: object

>>> # Second-half of year only
>>> daterng[daterng.dt.quarter > 2]
2   2017-09-30
3   2017-12-31
6   2018-09-30
7   2018-12-31
dtype: datetime64[ns]

>>> daterng[daterng.dt.is_year_end]
3   2017-12-31
7   2018-12-31
dtype: datetime64[ns]
```


The third accessor, .cat, is for Categorical data only, which you’ll see shortly in its own section.

## 4. Create a DatetimeIndex From Component Columns

Speaking of datetime-like data, as in daterng above, it’s possible to create a Pandas DatetimeIndex from multiple component columns that together form a date or datetime:
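Here's a sketch; the year/month/day combinations are illustrative, and the data column holds random draws:

```python
>>> from itertools import product
>>> import numpy as np

>>> datecols = ['year', 'month', 'day']
>>> df = pd.DataFrame(list(product([2017, 2016], [1, 2], [1, 2, 3])),
...                   columns=datecols)
>>> df['data'] = np.random.randn(len(df))

>>> df.index = pd.to_datetime(df[datecols])
```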

Finally, you can drop the old individual columns and convert to a Series:

```python
>>> df = df.drop(datecols, axis=1).squeeze()
>>> df.head()
2017-01-01   -0.0767
2017-01-02   -1.2798
2017-01-03    0.4032
2017-02-01    1.2377
2017-02-02   -0.2060
Name: data, dtype: float64

>>> df.index.dtype_str
'datetime64[ns]'
```


The intuition behind passing a DataFrame is that a DataFrame resembles a Python dictionary where the column names are keys, and the individual columns (Series) are the dictionary values. That’s why pd.to_datetime(df[datecols].to_dict(orient='list')) would also work in this case. This mirrors the construction of Python’s datetime.datetime, where you pass keyword arguments such as datetime.datetime(year=2000, month=1, day=15, hour=10).

## 5. Use Categorical Data to Save on Time and Space

One powerful Pandas feature is its Categorical dtype.

Even if you’re not always working with gigabytes of data in RAM, you’ve probably run into cases where straightforward operations on a large DataFrame seem to hang up for more than a few seconds.

Pandas object dtype is often a great candidate for conversion to category data. (object is a container for Python str, heterogeneous data types, or “other” types.) Strings occupy a significant amount of space in memory:
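Consider this Series of color names, which the rest of this section reuses. (The byte counts assume CPython 3.6, where an ASCII str costs 49 bytes of overhead plus one byte per character.)

```python
>>> import sys

>>> colors = pd.Series([
...     'periwinkle',
...     'mint green',
...     'burnt orange',
...     'periwinkle',
...     'burnt orange',
...     'rose',
...     'rose',
...     'mint green',
...     'rose',
...     'navy'
... ])

>>> colors.apply(sys.getsizeof)
0    59
1    59
2    61
3    59
4    61
5    53
6    53
7    59
8    53
9    53
dtype: int64
```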

Note: I used sys.getsizeof() to show the memory occupied by each individual value in the Series. Keep in mind these are Python objects that have some overhead in the first place. (sys.getsizeof('') will return 49 bytes.)

There is also colors.memory_usage(), which sums up the memory usage and relies on the .nbytes attribute of the underlying NumPy array. Don’t get too bogged down in these details: what is important is relative memory usage that results from type conversion, as you’ll see next.

Now, what if we could take the unique colors above and map each to a less space-hogging integer? Here is a naive implementation of that:

```python
>>> mapper = {v: k for k, v in enumerate(colors.unique())}
>>> mapper
{'periwinkle': 0, 'mint green': 1, 'burnt orange': 2, 'rose': 3, 'navy': 4}

>>> as_int = colors.map(mapper)
>>> as_int
0    0
1    1
2    2
3    0
4    2
5    3
6    3
7    1
8    3
9    4
dtype: int64

>>> as_int.apply(sys.getsizeof)
0    24
1    28
2    28
3    24
4    28
5    28
6    28
7    28
8    28
9    28
dtype: int64
```


Note: Another way to do this same thing is with Pandas’ pd.factorize(colors):
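pd.factorize() returns a tuple of (codes, uniques); its codes reproduce the manual encoding above:

```python
>>> pd.factorize(colors)[0]
array([0, 1, 2, 0, 2, 3, 3, 1, 3, 4])
```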

Either way, you are encoding the object as an enumerated type (categorical variable).

You’ll notice immediately that memory usage is just about cut in half compared to when the full strings are used with object dtype.

Earlier in the section on accessors, I mentioned the .cat (categorical) accessor. The above with mapper is a rough illustration of what is happening internally with Pandas’ Categorical dtype:

“The memory usage of a Categorical is proportional to the number of categories plus the length of the data. In contrast, an object dtype is a constant times the length of the data.” (Source)


In colors above, you have a ratio of 2 values for every unique value (category):

```python
>>> len(colors) / colors.nunique()
2.0
```


As a result, the memory savings from converting to Categorical is good, but not great:
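Compare the deep memory usage before and after conversion (exact byte counts are platform- and version-dependent):

```python
>>> colors.memory_usage(index=False, deep=True)
650
>>> colors.astype('category').memory_usage(index=False, deep=True)
495
```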

However, if you blow out the proportion above, with a lot of data and few unique values (think about data on demographics or alphabetic test scores), the reduction in memory required is over 10 times:

```python
>>> manycolors = colors.repeat(10)
>>> len(manycolors) / manycolors.nunique()  # Much greater than 2.0x
20.0

>>> manycolors.memory_usage(index=False, deep=True)
6500
>>> manycolors.astype('category').memory_usage(index=False, deep=True)
585
```


A bonus is that computational efficiency gets a boost too: for categorical Series, the string operations are performed on the .cat.categories attribute rather than on each original element of the Series.

In other words, the operation is done once per unique category, and the results are mapped back to the values. Categorical data has a .cat accessor that is a window into attributes and methods for manipulating the categories:
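Convert colors and peek inside (the categories are sorted alphabetically by default):

```python
>>> ccolors = colors.astype('category')
>>> ccolors.cat.categories
Index(['burnt orange', 'mint green', 'navy', 'periwinkle', 'rose'], dtype='object')
```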

In fact, you can reproduce something similar to the example above that you did manually:

```python
>>> ccolors.cat.codes
0    3
1    1
2    0
3    3
4    0
5    4
6    4
7    1
8    4
9    2
dtype: int8
```


All that you need to do to exactly mimic the earlier manual output is to reorder the codes:
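One way is to pass the keys of mapper (dicts preserve insertion order in Python 3.6+) to .cat.reorder_categories():

```python
>>> ccolors.cat.reorder_categories(mapper).cat.codes
0    0
1    1
2    2
3    0
4    2
5    3
6    3
7    1
8    3
9    4
dtype: int8
```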

Notice that the dtype is NumPy’s int8, an 8-bit signed integer that can take on values from -128 to 127. (Only a single byte is needed to represent a value in memory. 64-bit signed ints would be overkill in terms of memory usage.) Our rough-hewn example resulted in int64 data by default, whereas Pandas is smart enough to downcast categorical data to the smallest numerical dtype possible.

Most of the attributes for .cat are related to viewing and manipulating the underlying categories themselves:


```python
>>> [i for i in dir(ccolors.cat) if not i.startswith('_')]
['add_categories',
 'as_ordered',
 'as_unordered',
 'categories',
 'codes',
 'ordered',
 'remove_categories',
 'remove_unused_categories',
 'rename_categories',
 'reorder_categories',
 'set_categories']
```


There are a few caveats, though. Categorical data is generally less flexible. For instance, if inserting previously unseen values, you need to add this value to a .categories container first:
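A sketch of what that looks like (traceback and error message abbreviated):

```python
>>> ccolors.iloc[5] = 'a new color'
Traceback (most recent call last):
...
ValueError: Cannot setitem on a Categorical with a new category,
set the categories first

>>> ccolors = ccolors.cat.add_categories(['a new color'])
>>> ccolors.iloc[5] = 'a new color'  # Works now
```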

If you plan to be setting values or reshaping data rather than deriving new computations, Categorical types may be less nimble.

## 6. Introspect Groupby Objects via Iteration

When you call df.groupby('x'), the resulting Pandas groupby objects can be a bit opaque. This object is lazily instantiated and doesn’t have any meaningful representation on its own.

You can demonstrate with the abalone dataset from example 1:

```python
>>> abalone['ring_quartile'] = pd.qcut(abalone.rings, q=4, labels=range(1, 5))
>>> grouped = abalone.groupby('ring_quartile')

>>> grouped
<pandas.core.groupby.groupby.DataFrameGroupBy object at 0x11c1169b0>
```


Alright, now you have a groupby object, but what is this thing, and how do you see it?

Before you call something like grouped.apply(func), you can take advantage of the fact that groupby objects are iterable:

Each “thing” yielded by grouped.__iter__() is a tuple of (name, subsetted object), where name is the value of the column on which you’re grouping, and subsetted object is a DataFrame that is a subset of the original DataFrame based on whatever grouping condition you specify. That is, the data gets chunked by group:


```python
>>> for idx, frame in grouped:
...     print(f'Ring quartile: {idx}')
...     print('-' * 16)
...     print(frame.nlargest(3, 'weight'), end='\n\n')
...
Ring quartile: 1
----------------
     sex  length   diam  height  weight  rings ring_quartile
2619   M   0.690  0.540   0.185  1.7100      8             1
1044   M   0.690  0.525   0.175  1.7005      8             1
1026   M   0.645  0.520   0.175  1.5610      8             1

Ring quartile: 2
----------------
     sex  length  diam  height  weight  rings ring_quartile
2811   M   0.725  0.57   0.190  2.3305      9             2
1426   F   0.745  0.57   0.215  2.2500      9             2
1821   F   0.720  0.55   0.195  2.0730      9             2

Ring quartile: 3
----------------
     sex  length  diam  height  weight  rings ring_quartile
1209   F   0.780  0.63   0.215   2.657     11             3
1051   F   0.735  0.60   0.220   2.555     11             3
3715   M   0.780  0.60   0.210   2.548     11             3

Ring quartile: 4
----------------
     sex  length   diam  height  weight  rings ring_quartile
891    M   0.730  0.595    0.23  2.8255     17             4
1763   M   0.775  0.630    0.25  2.7795     12             4
165    M   0.725  0.570    0.19  2.5500     14             4
```


Relatedly, a groupby object also has .groups and a group-getter, .get_group():
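For instance (the .get_group() output is elided here):

```python
>>> grouped.groups.keys()
dict_keys([1, 2, 3, 4])

>>> grouped.get_group(2).head()  # Rows belonging to ring quartile 2
     sex  length   diam  height  weight  rings ring_quartile
...
```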

This can help you be a little more confident that the operation you’re performing is the one you want:

```python
>>> grouped['height', 'weight'].agg(['mean', 'median'])
               height         weight
                 mean median    mean  median
ring_quartile
1              0.1066  0.105  0.4324  0.3685
2              0.1427  0.145  0.8520  0.8440
3              0.1572  0.155  1.0669  1.0645
4              0.1648  0.165  1.1149  1.0655
```


No matter what calculation you perform on grouped, be it a single Pandas method or custom-built function, each of these “sub-frames” is passed one-by-one as an argument to that callable. This is where the term “split-apply-combine” comes from: break the data up by groups, perform a per-group calculation, and recombine in some aggregated fashion.

If you’re having trouble visualizing exactly what the groups will actually look like, simply iterating over them and printing a few can be tremendously useful.

## 7. Use This Mapping Trick for Membership Binning

Let’s say that you have a Series and a corresponding “mapping table” where each value belongs to a multi-member group, or to no groups at all:
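For example (the specific countries and group definitions here are illustrative):

```python
>>> countries = pd.Series([
...     'United States',
...     'Canada',
...     'Mexico',
...     'Belgium',
...     'United Kingdom',
...     'Thailand'
... ])

>>> groups = {
...     'North America': ('United States', 'Canada', 'Mexico', 'Greenland'),
...     'Europe': ('France', 'Germany', 'United Kingdom', 'Belgium')
... }
```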

In other words, you need to map countries to the following result:

```
0    North America
1    North America
2    North America
3           Europe
4           Europe
5            other
dtype: object
```


What you need here is a function similar to Pandas’ pd.cut(), but for binning based on categorical membership. You can use pd.Series.map(), which you already saw in example #5, to mimic this:
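Here is one way such a helper might look (a sketch; membership_map is the name assumed in the test drive below):

```python
>>> from typing import Any

>>> def membership_map(s: pd.Series, groups: dict,
...                    fillvalue: Any=-1) -> pd.Series:
...     # Reverse & expand the dictionary key-value pairs
...     groups = {x: k for k, v in groups.items() for x in v}
...     return s.map(groups).fillna(fillvalue)
```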

This should be significantly faster than a nested Python loop through groups for each country in countries.

Here’s a test drive:

```python
>>> membership_map(countries, groups, fillvalue='other')
0    North America
1    North America
2    North America
3           Europe
4           Europe
5            other
dtype: object
```


Let’s break down what’s going on here. (Sidenote: this is a great place to step into a function’s scope with Python’s debugger, pdb, to inspect what variables are local to the function.)

The objective is to map each group in groups to an integer. However, Series.map() will not recognize 'ab'—it needs the broken-out version with each character from each group mapped to an integer. This is what the dictionary comprehension is doing:
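To see this in miniature, suppose the group values were short strings of characters (illustrative):

```python
>>> groups = dict(enumerate(('ab', 'cd', 'ef')))
>>> {x: k for k, v in groups.items() for x in v}
{'a': 0, 'b': 0, 'c': 1, 'd': 1, 'e': 2, 'f': 2}
```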

This dictionary can be passed to s.map() to map or “translate” its values to their corresponding group indices.

## 8. Understand How Pandas Uses Boolean Operators

You may be familiar with Python’s operator precedence, where and, not, and or have lower precedence than arithmetic operators such as <, <=, >, >=, !=, and ==. Consider the two statements below, where < and > have higher precedence than the and operator:

```python
>>> # Evaluates to "False and True"
>>> 4 < 3 and 5 > 4
False

>>> # Evaluates to 4 < 5 > 4
>>> 4 < (3 and 5) > 4
True
```


Note: It’s not specifically Pandas-related, but 3 and 5 evaluates to 5 because of short-circuit evaluation:

“The return value of a short-circuit operator is the last evaluated argument.” (Source)


Pandas (and NumPy, on which Pandas is built) does not use and, or, or not. Instead, it uses &, |, and ~, respectively, which are normal, bona fide Python bitwise operators.

These operators are not “invented” by Pandas. Rather, &, |, and ~ are valid Python built-in operators that have higher (rather than lower) precedence than arithmetic operators. (Pandas overrides dunder methods like .__ror__() that map to the | operator.) To sacrifice some detail, you can think of “bitwise” as “elementwise” as it relates to Pandas and NumPy:
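For example, & compares two boolean Series element by element:

```python
>>> pd.Series([True, True, False]) & pd.Series([True, False, False])
0     True
1    False
2    False
dtype: bool
```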

It pays to understand this concept in full. Let’s say that you have a range-like Series:

```python
>>> s = pd.Series(range(10))
```


I would guess that you may have seen this exception raised at some point:
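Attempting the compound comparison without parentheses raises it (traceback abbreviated):

```python
>>> s % 2 == 0 & s > 3
Traceback (most recent call last):
...
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```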

What’s happening here? It’s helpful to incrementally bind the expression with parentheses, spelling out how Python expands this expression step by step:

```python
s % 2 == 0 & s > 3                      # Same as above, original expression
(s % 2) == 0 & s > 3                    # Modulo is most tightly binding here
(s % 2) == (0 & s) > 3                  # Bitwise-and is second-most-binding
(s % 2) == (0 & s) and (0 & s) > 3      # Expand the statement
((s % 2) == (0 & s)) and ((0 & s) > 3)  # The and operator is least-binding
```


The expression s % 2 == 0 & s > 3 is equivalent to (or gets treated as) ((s % 2) == (0 & s)) and ((0 & s) > 3) because of Python’s comparison chaining: x < y <= z is equivalent to x < y and y <= z.

Okay, now stop there, and let’s bring this back to Pandas-speak. You have two Pandas Series that we’ll call left and right:
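Continuing with the expression from above, those two halves would be:

```python
>>> left = (s % 2) == (0 & s)
>>> right = (0 & s) > 3
```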

You know that a statement of the form left and right is truth-value testing both left and right, as in the following:

```python
>>> bool(left) and bool(right)
```


The problem is that Pandas developers intentionally don’t establish a truth-value (truthiness) for an entire Series. Is a Series True or False? Who knows? The result is ambiguous:
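Asking for the truth value of a whole Series raises the same error:

```python
>>> bool(s)
Traceback (most recent call last):
...
ValueError: The truth value of a Series is ambiguous. Use a.empty, a.bool(), a.item(), a.any() or a.all().
```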

The only comparison that makes sense is an elementwise comparison. That’s why, if an arithmetic operator is involved, you’ll need parentheses:

```python
>>> (s % 2 == 0) & (s > 3)
0    False
1    False
2    False
3    False
4     True
5    False
6     True
7    False
8     True
9    False
dtype: bool
```


In short, if you see the ValueError above pop up with boolean indexing, the first thing you should probably look to do is sprinkle in some needed parentheses.

## 9. Load Data From the Clipboard

It’s a common situation to need to transfer data from a place like Excel or Sublime Text to a Pandas data structure. Ideally, you want to do this without going through the intermediate step of saving the data to a file and afterwards reading in the file to Pandas.

You can load in DataFrames from your computer’s clipboard data buffer with pd.read_clipboard(). Its keyword arguments are passed on to pd.read_table().

This allows you to copy structured text directly to a DataFrame or Series. Imagine the data sitting in Excel; its plain-text representation (for example, in a text editor) would look something like this:
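```
a   b           c       d
0   1           inf     1/1/00
2   7.389056099 N/A     5-Jan-13
4   54.59815003 nan     7/24/18
6   403.4287935 None    NaT
```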

Simply highlight and copy the plain text above, and call pd.read_clipboard():

```python
>>> df = pd.read_clipboard(na_values=[None], parse_dates=['d'])
>>> df
   a         b    c          d
0  0    1.0000  inf 2000-01-01
1  2    7.3891  NaN 2013-01-05
2  4   54.5982  NaN 2018-07-24
3  6  403.4288  NaN        NaT

>>> df.dtypes
a             int64
b           float64
c           float64
d    datetime64[ns]
dtype: object
```


## 10. Write Pandas Objects Directly to Compressed Format

This one’s short and sweet to round out the list. As of Pandas version 0.21.0, you can write Pandas objects directly to gzip, bz2, zip, or xz compression, rather than stashing the uncompressed file in memory and converting it. Here’s an example using the abalone data from trick #1:
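First, write a compressed copy (a sketch using the compression parameter that these writers gained in 0.21.0):

```python
>>> abalone.to_json('df.json.gz', orient='records',
...                 lines=True, compression='gzip')
```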

In this case, the size difference is 11.6x:

```python
>>> import os.path
>>> abalone.to_json('df.json', orient='records', lines=True)
>>> os.path.getsize('df.json') / os.path.getsize('df.json.gz')
11.603035760226396
```


## Want to Add to This List? Let Us Know

Hopefully, you were able to pick up a couple of useful tricks from this list to lend your Pandas code better readability, versatility, and performance.

If you have something up your sleeve that’s not covered here, please leave a suggestion in the comments or as a GitHub Gist. We will gladly add to this list and give credit where it’s due.
