【利用python进行数据分析】学习笔记-第6章 数据加载、存储与文件格式——读写文本格式的数据

第6章 数据加载、存储与文件格式

输入输出分类:

  • 读取文本文件和其他更高效的磁盘存储格式
  • 加载数据库中的数据
  • 利用Web API操作网络资源

6.1 读写文本格式的数据

  • pandas中的解析函数

    函数说明
    read_csv从文件、URL、文件型对象中加载带分隔符的数据。默认分隔符为逗号
    read_table从文件、URL、文件型对象中加载带分隔符的数据。默认分隔符为制表符("t’)
    read_fwf读取定宽列格式数据(也就是说,没有分隔符)
    read_clipboard读取剪贴板中的数据,可以看做read_table的剪贴板版。再将网页转换为表格时很有用
    read_excel从Excel XLS 或XLSX file读取表格数据
    read_hdf读取pandas写的HDFS文件
    read_html读取HTML文档中的所有表格
    read_json读取JSON(JavaScript Object Notation)字符串中的数据
    read_msgpack二进制格式编码的pandas 数据
    read_pickle读取 Python pickle格式中存储的任意对象
    read_sas读取存储于SAS系统自定义存储格式的SAS 数据集
    read_sql(使用SQLAlchemy)读取SQL查询结果为pandas的 DataFrame
    read_stata读取 Stata文件格式的数据集
    read_feather读取Feather 二进制文件格式
  • 函数的选项

    • 索引:将⼀个或多个列当做返回的DataFrame处理,以及是否从文件、用户获取列名。
    • 类型推断和数据转换:包括用户定义值的转换、和自定义的缺失值标记列表等。
    • 日期解析:包括组合功能,比如将分散在多个列中的日期时间信息组合成结果中的单个列。
    • 迭代:支持对大文件进行逐块迭代。
    • 不规整数据问题:跳过⼀些行、页脚、注释或其他⼀些不重要的东西(比如由成千上万个逗号隔开的数值数据)。
# ⽂件以逗号分隔,可以使⽤read_csv将其读⼊⼀个DataFrame
In [9]: df = pd.read_csv('examples/ex1.csv')
In [10]: df
Out[10]:
   a   b   c   d   message
0  1   2   3   4     hello
1  5   6   7   8     world
2  9  10  11  12       foo

# 可以使⽤read_table,并指定分隔符
In [11]: pd.read_table('examples/ex1.csv', sep=',')
Out[11]:
   a   b   c   d  message
0  1   2   3   4    hello
1  5   6   7   8    world
2  9  10  11  12      foo

# 如果文件没有标题行
In [13]: pd.read_csv('examples/ex2.csv', header=None)
Out[13]:
   0   1   2   3      4
0  1   2   3   4  hello
1  5   6   7   8  world
2  9  10  11  12    foo

In [14]: pd.read_csv('examples/ex2.csv', names=['a', 'b', 'c', 'd'])
Out[14]:
   a   b   c   d  message
0  1   2   3   4    hello
1  5   6   7   8    world
2  9  10  11  12      foo

# 假设你希望将message列做成DataFrame的索引。你可以明确表示要将该列放到索引4的位置上,也可以通过index_col参数指定"message"
In [15]: names = ['a', 'b', 'c', 'd', 'message']

In [16]: pd.read_csv('examples/ex2.csv', names=names, index_col='message')
Out[16]:
         a   b   c   d
message
hello    1   2   3   4
world    5   6   7   8
foo      9  10  11  12

# 如果希望将多个列做成⼀个层次化索引,只需传⼊由列编号或列名组成的列表即可
In [17]: !cat examples/csv_mindex.csv
key1,key2,value1,value2
one,a,1,2
one,b,3,4
one,c,5,6
one,d,7,8
two,a,9,10
two,b,11,12
two,c,13,14
two,d,15,16

In [18]: parsed = pd.read_csv('examples/csv_mindex.csv',
   ....:                      index_col=['key1', 'key2'])

In [19]: parsed
Out[19]:
            value1  value2
key1  key2
one      a       1       2
         b       3       4
         c       5       6
         d       7       8
two      a       9      10
         b      11      12
         c      13      14
         d      15      16
            
# 有些表格可能不是⽤固定的分隔符去分隔字段的
In [20]: list(open('examples/ex3.txt'))
Out[20]:
['             A          B          C\n',
 'aaa  -0.264438  -1.026059  -0.619500\n',
 'bbb   0.927272   0.302904  -0.032399\n',
 'ccc  -0.264273  -0.386314  -0.217601\n',
 'ddd  -0.871858  -0.348382   1.100491\n']

# 可以传递⼀个正则表达式作为read_table的分隔符
In [21]: result = pd.read_table('examples/ex3.txt', sep='\s+')
In [22]: result
Out[22]:
             A          B          C
aaa  -0.264438  -1.026059  -0.619500
bbb   0.927272   0.302904  -0.032399
ccc  -0.264273  -0.386314  -0.217601
ddd  -0.871858  -0.348382   1.100491

# 由于列名⽐数据⾏的数量少,所以read_table推断第⼀列应该是DataFrame的索引
# 可以⽤skiprows跳过⽂件的第⼀⾏、第三⾏和第四⾏
In [23]: !cat examples/ex4.csv
# hey!
a,b,c,d,message
# just wanted to make things more difficult for you
# who reads CSV files with computers, anyway?
1,2,3,4,hello
5,6,7,8,world
9,10,11,12,foo

In [24]: pd.read_csv('examples/ex4.csv', skiprows=[0, 2, 3])
Out[24]:
   a   b   c   d  message
0  1   2   3   4    hello
1  5   6   7   8    world
2  9  10  11  12      foo

# pandas会⽤⼀组经常出现的标记值进⾏识别,⽐如NA及NULL
In [25]: !cat examples/ex5.csv
something,a,b,c,d,message
one,1,2,3,4,NA
two,5,6,,8,world
three,9,10,11,12,foo

In [26]: result = pd.read_csv('examples/ex5.csv')
    
In [27]: result
Out[27]:
   something  a   b     c   d  message
0        one  1   2   3.0   4      NaN
1        two  5   6   NaN   8    world
2      three  9  10  11.0  12      foo

In [28]: pd.isnull(result)
Out[28]:
   something      a      b      c      d  message
0      False  False  False  False  False     True
1      False  False  False   True  False    False
2      False  False  False  False  False    False

# na_values可以⽤⼀个列表或集合的字符串表示缺失值
In [29]: result = pd.read_csv('examples/ex5.csv', na_values=['NULL'])
                                                
In [30]: result
Out[30]:
   something  a   b     c   d  message
0        one  1   2   3.0   4      NaN
1        two  5   6   NaN   8    world
2      three  9  10  11.0  12      foo
                                                             
# 字典的各列可以使⽤不同的NA标记值
In [31]: sentinels = {'message': ['foo', 'NA'], 'something': ['two']}
                                                           
In [32]: pd.read_csv('examples/ex5.csv', na_values=sentinels)
Out[32]:
   someting  a   b     c   d  message
0        one  1   2   3.0   4      NaN
1        NaN  5   6   NaN   8    world
2      three  9  10  11.0  12      NaN

6.1.1 逐块读取文本文件

# 在看大文件之前,先设置pandas显示地更紧些
In [33]: pd.options.display.max_rows = 10
In [34]: result = pd.read_csv('examples/ex6.csv')
In [35]: result
Out[35]:
            one        two      three       four  key
0      0.467976  -0.038649  -0.295344  -1.824726    L
1     -0.358893   1.404453   0.704965  -0.200638    B
2     -0.501840   0.659254  -0.421691  -0.057688    G
3      0.204886   1.074134   1.388361  -0.982404    R
4      0.354628  -0.133116   0.283763  -0.837063    Q
...         ...        ...        ...        ...   ..
9995   2.311896  -0.417070  -1.409599  -0.515821    L
9996  -0.479893  -0.650419   0.745152  -0.646038    E
9997   0.523331   0.787112   0.486066   1.093156    K
9998  -0.362559   0.598894  -1.843201   0.887292    G
9999  -0.096376  -1.012999  -0.657431  -0.573315    0
[10000 rows x 5 columns]
If you want to only read a small

# 如果只想读取⼏⾏(避免读取整个⽂件),通过nrows进⾏指定即可
In [36]: pd.read_csv('examples/ex6.csv', nrows=5)
Out[36]:
         one        two      three       four  key
0   0.467976  -0.038649  -0.295344  -1.824726    L
1  -0.358893   1.404453   0.704965  -0.200638    B
2  -0.501840   0.659254  -0.421691  -0.057688    G
3   0.204886   1.074134   1.388361  -0.982404    R
4   0.354628  -0.133116   0.283763  -0.837063    Q

# 要逐块读取⽂件,可以指定chunksize(⾏数)
In [874]: chunker = pd.read_csv('ch06/ex6.csv', chunksize=1000)
    
In [875]: chunker
Out[875]: <pandas.io.parsers.TextParser at 0x8398150>

6.1.2 将数据写出到文本格式

# 数据也可以被输出为分隔符格式的⽂本
In [41]: data = pd.read_csv('examples/ex5.csv')

In [42]: data
Out[42]:
   something  a   b     c   d  message
0        one  1   2   3.0   4      NaN
1        two  5   6   NaN   8    world
2      three  9  10  11.0  12      foo

In [43]: data.to_csv('examples/out.csv')

In [44]: !cat examples/out.csv
,something,a,b,c,d,message
0,one,1,2,3.0,4,
1,two,5,6,,8,world
2,three,9,10,11.0,12,foo

# 还可以使⽤其他分隔符
In [45]: import sys
    
In [46]: data.to_csv(sys.stdout, sep='|')
|something|a|b|c|d|message
0|one|1|2|3.0|4|
1|two|5|6||8|world
2|three|9|10|11.0|12|foo

# 缺失值在输出结果中会被表示为空字符串。也可以将其表示为别的标记值
In [47]: data.to_csv(sys.stdout, na_rep='NULL')
,something,a,b,c,d,message
0,one,1,2,3.0,4,NULL
1,two,5,6,NULL,8,world
2,three,9,10,11.0,12,foo

# ⾏和列的标签。当然,它们也都可以被禁⽤
In [48]: data.to_csv(sys.stdout, index=False, header=False)
one,1,2,3.0,4,
two,5,6,,8,world
three,9,10,11.0,12,foo

# 可以只写出⼀部分的列,并以你指定的顺序排列
In [49]: data.to_csv(sys.stdout, index=False, columns=['a', 'b', 'c'])
a,b,c
1,2,3.0
5,6,
9,10,11.0

# Series也有⼀个to_csv⽅法
In [50]: dates = pd.date_range('1/1/2000', periods=7)
    
In [51]: ts = pd.Series(np.arange(7), index=dates)
    
In [52]: ts.to_csv('examples/tseries.csv')
    
In [53]: !cat examples/tseries.csv
2000-01-01,0
2000-01-02,1
2000-01-03,2
2000-01-04,3
2000-01-05,4
2000-01-06,5
2000-01-07,6

6.1.3 处理分隔符格式

# 对于任何单字符分隔符⽂件,可以直接使⽤Python内置的csv模块
import csv
f = open('examples/ex7.csv')
reader = csv.reader(f)

# 对这个reader进⾏迭代将会为每⾏产⽣⼀个元组
In [56]: for line in reader:
   ....: print(line)
['a', 'b', 'c']
['1', '2', '3']
['1', '2', '3']

# 读取⽂件到⼀个多⾏的列表中
In [57]: with open('examples/ex7.csv') as f:
   ....:     lines = list(csv.reader(f))
    
# 将⾏分为标题⾏和数据⾏
In [58]: header, values = lines[0], lines[1:]
    
# 用字典构造式和zip将行转置为列
In [59]: data_dict = {h: v for h, v in zip(header, zip(*values))}
    
In [60]: data_dict
Out[60]: {'a': ('1', '1'), 'b': ('2', '2'), 'c': ('3', '3')}
    
# 定义csv.Dialect的⼀个⼦类
class my_dialect(csv.Dialect):
    lineterminator = '\n'
    delimiter = ';'
    quotechar = '"'
    quoting = csv.QUOTE_MINIMAL
reader = csv.reader(f, dialect=my_dialect)

6.1.4 JSON数据

JSON已经成为通过HTTP请求在Web浏览器和其他应⽤程序之间发送数据的标准格式之⼀

obj = """
{"name": "Wes",
 "places_lived": ["United States", "Spain", "Germany"],
 "pet": null,
 "siblings": [{"name": "Scott", "age": 30, "pets": ["Zeus", "Zuko"]},
              {"name": "Katie", "age": 38,
               "pets": ["Sixes", "Stache", "Cisco"]}]
}
"""
In [62]: import json

In [63]: result = json.loads(obj)

In [64]: result
Out[64]:
{'name': 'Wes',
 'pet': None,
 'places_lived': ['United States', 'Spain', 'Germany'],
 'siblings': [{'age': 30, 'name': 'Scott', 'pets': ['Zeus', 'Zuko']},
{'age': 38, 'name': 'Katie', 'pets': ['Sixes', 'Stache', 'Cisco']}]}

# json.dumps则将Python对象转换成JSON格式
In [65]: asjson = json.dumps(result)
    
In [66]: siblings = pd.DataFrame(result['siblings'], columns=['name', 'age'])
                                                              
In [67]: siblings
Out[67]:
    name  age
0  Scott   30
1  Katie   38

# pandas.read_json可以⾃动将特别格式的JSON数据集转换为Series或DataFrame
In [68]: !cat examples/example.json
[{"a": 1, "b": 2, "c": 3},
 {"a": 4, "b": 5, "c": 6},
 {"a": 7, "b": 8, "c": 9}]

In [69]: data = pd.read_json('examples/example.json')
    
In [70]: data
Out[70]:
   a  b  c
0  1  2  3
1  4  5  6
2  7  8  9

# 如果需要将数据从pandas输出到JSON,可以使⽤to_json⽅法                                   
In [72]: print(data.to_json(orient='records'))
[{"a":1,"b":2,"c":3},{"a":4,"b":5,"c":6},{"a":7,"b":8,"c":9}]

6.1.5 XML和HTML:Web信息收集

# pandas.read_html有⼀些选项,默认条件下,它会搜索、尝试解析<table>标签内的的表格数据。结果是⼀个列表的DataFrame对象
In [73]: tables = pd.read_html('examples/fdic_failed_bank_list.
                              
In [74]: len(tables)
Out[74]: 1
                               
In [75]: failures = tables[0]
                               
In [76]: failures.head()
Out[76]:
                      Bank Name             City  ST   CERT\
0                   Allied Bank         Mulberry  AR     91
1  The Woodbury Banking Company         Woodbury  GA  11297
2        First CornerStone Bank  King of Prussia  PA  35312
3            Trust Company Bank          Memphis  TN   9956
4    North Milwaukee State Bank        Milwaukee  WI  20364
                 Acquiring Institution                 Closing Date 
0                         Today's Bank  September 23, 2016 November 
1                          United Bank     August 19, 2016 November 
2  First-Citizens Bank & Trust Company         May 6, 2016 September 
3           The Bank of Fayette County      April 29, 2016 September 
4  First-Citizens Bank & Trust Company      March 11, 2016
                               
# 数据清洗和分析
In [77]: close_timestamps = pd.to_datetime(failures['Closing Date'])
In [78]: close_timestamps.dt.year.value_counts()
Out[78]:
2010  157
2009  140
2011   92
2012   51
2008   25
     ...
2004    4
2001    4
2007    3
2003    3
2000    2
Name: Closing Date, Length: 15, dtype: int64

6.1.6 利用lxml.objectify解析XML

# XML⽂件中的运⾏情况数据
<INDICATOR>
  <INDICATOR_SEQ>373889</INDICATOR_SEQ>
  <PARENT_SEQ></PARENT_SEQ>
  <AGENCY_NAME>Metro-North Railroad</AGENCY_NAME>
  <INDICATOR_NAME>Escalator Availability</INDICATOR_NAME>
  <DESCRIPTION>Percent of the time that escalators are operational
  systemwide. The availability rate is based on physical observations
  the morning of regular business days only. This is a new indicator began reporting in 2009.</DESCRIPTION>
  <PERIOD_YEAR>2011</PERIOD_YEAR>
  <PERIOD_MONTH>12</PERIOD_MONTH>
  <CATEGORY>Service Indicators</CATEGORY>
  <FREQUENCY>M</FREQUENCY>
  <DESIRED_CHANGE>U</DESIRED_CHANGE>   <INDICATOR_UNIT>%</INDICATOR_UNIT>
  <DECIMAL_PLACES>1</DECIMAL_PLACES>
  <YTD_TARGET>97.00</YTD_TARGET>
  <YTD_ACTUAL></YTD_ACTUAL>
  <MONTHLY_TARGET>97.00</MONTHLY_TARGET>
  <MONTHLY_ACTUAL></MONTHLY_ACTUAL>
</INDICATOR>

# 先⽤lxml.objectify解析该⽂件,然后通过getroot得到该XML⽂件的根节点的引⽤
from lxml import objectify

path = 'examples/mta_perf/Performance_MNR.xml'
parsed = objectify.parse(open(path))
root = parsed.getroot()

# 对于每条记录,可以⽤标记名和数据值填充⼀个字典
data = []

skip_fields = ['PARENT_SEQ', 'INDICATOR_SEQ',
               'DESIRED_CHANGE', 'DECIMAL_PLACES']

for elt in root.INDICATOR:
    el_data = {}
    for child in elt.getchildren():
        if child.tag in skip_fields:
            continue
        el_data[child.tag] = child.pyval
     data.append(el_data)
    
# 将这组字典转换为⼀个DataFrame
In [81]: perf = pd.DataFrame(data)
    
In [82]: perf.head()
Out[82]:
Empty DataFrame
Columns: []
Index: []
    
# XML数据可以⽐本例复杂得多。每个标记都可以有元数据
from io import StringIO
tag = '<a href="http://www.google.com">Google</a>'
root = objectify.parse(StringIO(tag)).getroot()

# 可以访问标签或链接⽂本中的任何字段
In [84]: root
Out[84]: <Element a at 0x7f6b15817748>
    
In [85]: root.get('href')
Out[85]: 'http://www.google.com'
    
In [86]: root.text
Out[86]: 'Google'
评论 6
添加红包

请填写红包祝福语或标题

红包个数最小为10个

红包金额最低5元

当前余额3.43前往充值 >
需支付:10.00
成就一亿技术人!
领取后你会自动成为博主和红包主的粉丝 规则
hope_wisdom
发出的红包

打赏作者

From Star.

你的鼓励将是我创作的最大动力

¥1 ¥2 ¥4 ¥6 ¥10 ¥20
扫码支付:¥1
获取中
扫码支付

您的余额不足,请更换扫码支付或充值

打赏作者

实付
使用余额支付
点击重新获取
扫码支付
钱包余额 0

抵扣说明:

1.余额是钱包充值的虚拟货币,按照1:1的比例进行支付金额的抵扣。
2.余额无法直接购买下载,可以购买VIP、付费专栏及课程。

余额充值