Python Data Storage: CSV

This post covers writing and reading CSV files. It first shows how to write CSV files with Python's csv module using tuples and dictionaries, then reads them back with csv.reader and csv.DictReader. For convenient access to the data while reading, namedtuple and the dictionary get method are used. Finally, an example demonstrates scraping data from a web page and saving it to a CSV file.

CSV files store tabular data (numbers and text) as plain text. Records are separated by some kind of line break, and the fields within a record are separated by another character, most commonly a comma or a tab.

For example:

# coding: utf-8
import csv

headers = ['ID', 'UserName', 'Password', 'Age', 'Country']
rows = [
    (1001, "guobao", "1382_pass", 21, "China"),
    (1002, "Mary", "Mary_pass", 20, "USA"),
    (1003, "Jack", "Jack_pass", 20, "USA"),
]

with open('guguobao.csv', 'w') as f:
    f_csv = csv.writer(f)
    f_csv.writerow(headers)
    f_csv.writerows(rows)

Output:

ID,UserName,Password,Age,Country

1001,guobao,1382_pass,21,China

1002,Mary,Mary_pass,20,USA

1003,Jack,Jack_pass,20,USA
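One caveat: depending on how the output file is opened, csv.writer on Windows can emit a blank line after every record. A minimal sketch of the usual fix, assuming Python 2 (on Python 3 the equivalent is open(..., 'w', newline='')):

import csv

headers = ['ID', 'UserName', 'Password', 'Age', 'Country']
rows = [(1001, "guobao", "1382_pass", 21, "China")]

# open in binary mode so csv.writer fully controls the line endings
with open('guguobao.csv', 'wb') as f:
    f_csv = csv.writer(f)
    f_csv.writerow(headers)
    f_csv.writerows(rows)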

The rows list above contains tuples, but it can also be a list of dictionaries, written with csv.DictWriter, for example:

import csv

headers = ['ID', 'UserName', 'Password', 'Age', 'Country']
rows = [
    {'ID': 1001, 'UserName': "qiye", 'Password': "qiye_pass", 'Age': 24, 'Country': "China"},
    {'ID': 1002, 'UserName': "Mary", 'Password': "Mary_pass", 'Age': 20, 'Country': "USA"},
    {'ID': 1003, 'UserName': "Jack", 'Password': "Jack_pass", 'Age': 20, 'Country': "USA"},
]

with open('qiye.csv', 'w') as f:
    f_csv = csv.DictWriter(f, headers)
    f_csv.writeheader()
    f_csv.writerows(rows)
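DictWriter also defines what happens when a row does not match the header list exactly: missing keys are filled with restval, and unexpected keys raise an error unless extrasaction='ignore' is passed. A small sketch that appends one made-up row to the same file:

import csv

headers = ['ID', 'UserName', 'Password', 'Age', 'Country']
rows = [
    # 'Country' is missing here, so it will be written as 'N/A'
    {'ID': 1004, 'UserName': "Tom", 'Password': "Tom_pass", 'Age': 22},
]

with open('qiye.csv', 'a') as f:
    f_csv = csv.DictWriter(f, headers, restval='N/A', extrasaction='ignore')
    f_csv.writerows(rows)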

Next is reading CSV. To read a CSV file back, create a reader object, for example:

import csv

with open('guguobao.csv', 'r') as f:
    f_csv = csv.reader(f)
    headers = next(f_csv)  # the first row is the header
    print headers
    for row in f_csv:
        print row
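Note that csv.reader returns every field as a string. To work with the numeric columns, convert them explicitly; a minimal sketch:

import csv

with open('guguobao.csv', 'r') as f:
    f_csv = csv.reader(f)
    headers = next(f_csv)
    for row in f_csv:
        # convert ID and Age back from strings to integers
        user_id = int(row[0])
        age = int(row[3])
        print user_id, row[1], age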

Fields can be accessed by index, e.g. row[0] for ID and row[3] for Age, but index access is easy to mix up, so a namedtuple can be used instead:

from collections import namedtuple
import csv

with open('qiye.csv') as f:
    f_csv = csv.reader(f)
    headings = next(f_csv)
    Row = namedtuple('Row', headings)  # build a record type from the header row
    for r in f_csv:
        row = Row(*r)
        print row.UserName, row.Password
        print row

Output:

C:\Python27\python.exe F:/爬虫/5.1.2.py

qiye qiye_pass

Row(ID='1001', UserName='qiye', Password='qiye_pass', Age='24', Country='China')

Mary Mary_pass

Row(ID='1002', UserName='Mary', Password='Mary_pass', Age='20', Country='USA')

Jack Jack_pass

Row(ID='1003', UserName='Jack', Password='Jack_pass', Age='20', Country='USA')

Process finished with exit code 0
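One caveat with this approach: namedtuple requires every heading to be a legal Python identifier. If the header row may contain spaces or other invalid characters, rename=True can be passed so such names are replaced automatically; a small sketch:

from collections import namedtuple
import csv

with open('qiye.csv') as f:
    f_csv = csv.reader(f)
    headings = next(f_csv)
    # rename=True replaces invalid headings with positional names such as _1
    Row = namedtuple('Row', headings, rename=True)
    for r in f_csv:
        print Row(*r)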

Column names such as row.UserName and row.Password can be used instead of subscripts. Besides namedtuple, another option is to read each row into a dictionary with csv.DictReader, as follows:

import csv

with open('qiye.csv') as f:
    f_csv = csv.DictReader(f)
    for row in f_csv:
        print row.get('UserName'), row.get('Password')

Output:

qiye qiye_pass
Mary Mary_pass
Jack Jack_pass
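If a CSV file has no header row, the column names can be passed to DictReader directly via fieldnames. A sketch, assuming a hypothetical header-less file no_header.csv with the same columns:

import csv

fieldnames = ['ID', 'UserName', 'Password', 'Age', 'Country']
with open('no_header.csv') as f:
    f_csv = csv.DictReader(f, fieldnames=fieldnames)
    for row in f_csv:
        print row.get('UserName'), row.get('Password')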

Finally, use CSV to save data scraped from the http://seputu.com home page: the section title, chapter title, and link of each chapter.

import csv
import re

import requests
from lxml import etree

user_agent = 'Mozilla/4.0 (compatible; MSIE 5.5; Windows NT)'
headers = {'User-Agent': user_agent}
r = requests.get('http://seputu.com/', headers=headers)

# parse the page with lxml
html = etree.HTML(r.text)
div_mulus = html.xpath('.//*[@class="mulu"]')  # find every div with class="mulu"
pattern = re.compile(r'\s*\[(.*)\]\s+(.*)')    # matches "[date] chapter title"

rows = []
for div_mulu in div_mulus:
    # the section heading lives in div.mulu-title > center > h2
    div_h2 = div_mulu.xpath('./div[@class="mulu-title"]/center/h2/text()')
    if len(div_h2) > 0:
        h2_title = div_h2[0].encode('utf-8')
        a_s = div_mulu.xpath('./div[@class="box"]/ul/li/a')
        for a in a_s:
            # the link target
            href = a.xpath('./@href')[0].encode('utf-8')
            # the title attribute holds the date and the chapter title
            box_title = a.xpath('./@title')[0]
            match = pattern.search(box_title)
            if match is not None:
                date = match.group(1).encode('utf-8')
                real_title = match.group(2).encode('utf-8')
                content = (h2_title, real_title, href, date)
                print content
                rows.append(content)

headers = ['title', 'real_title', 'href', 'date']
with open('qiye.csv', 'w') as f:
    f_csv = csv.writer(f)
    f_csv.writerow(headers)
    f_csv.writerows(rows)
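To check the result, the saved file can be read back with csv.DictReader, reusing the pattern shown earlier:

import csv

with open('qiye.csv') as f:
    f_csv = csv.DictReader(f)
    for row in f_csv:
        print row.get('title'), row.get('real_title'), row.get('href'), row.get('date')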
