Processing FASTA files in Python: parsing a FASTA file with a generator

I am trying to parse a large FASTA file and I am encountering out-of-memory errors. Some suggestions to improve the data handling would be appreciated. Currently the program correctly prints out the names; however, partway through the file I get a MemoryError.

Here is the generator

def readFastaEntry( fp ):
    name = ""
    seq = ""
    for line in fp:
        if line.startswith( ">" ):
            tmp = []
            tmp.append( name )
            tmp.append( seq )
            name = line
            seq = ""
            yield tmp
        else:
            seq = seq.join( line )

and here is the caller stub; more will be added after this part works:

import sys

fp = open( sys.argv[1], 'r' )
for seq in readFastaEntry( fp ):
    print seq[0]

For those not familiar with the FASTA format, here is an example:

>1 (PB2)
AATATATTCAATATGGAGAGAATAAAAGAACTAAGAGATCTAATGTCACAGTCTCGCACTCGCGAGATAC
TCACCAAAACCACTGTGGACCACATGGCCATAATCAAAAAGTACACATCAGGAAGGCAAGAGAAGAACCC
TGCACTCAGGATGAAGTGGATGATG
>2 (PB1)
AACCATTTGAATGGATGTCAATCCGACTTTACTTTTCTTGAAAGTTCCAGCGCAAAATGCCATAAGCACC
ACATTTCCCTATACTGGAGACCCTCC

Each entry starts with a ">" line stating the name, etc., and the next N lines are data. There is no defined end to the data other than the next line starting with a ">".

Solution

Have you considered using BioPython? It has a sequence reader that can read FASTA files, and if you are interested in coding one yourself, you can take a look at BioPython's code.
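If pulling in the library is acceptable, a minimal sketch using BioPython's SeqIO module might look like the following (assuming BioPython is installed and "f.fasta" stands in for your own file path):

from Bio import SeqIO

# SeqIO.parse returns an iterator, so the whole file is never held in memory at once
for record in SeqIO.parse("f.fasta", "fasta"):
    # record.id is the first token of the header line; record.seq holds the sequence
    print(record.id, len(record.seq))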

Edit: Code added

def read_fasta(fp):
    name, seq = None, []
    for line in fp:
        line = line.rstrip()
        if line.startswith(">"):
            # a new header: emit the previous record before starting the next one
            if name: yield (name, ''.join(seq))
            name, seq = line, []
        else:
            # collect sequence lines in a list and join once,
            # rather than repeatedly concatenating strings
            seq.append(line)
    # emit the final record, which has no following ">" line
    if name: yield (name, ''.join(seq))

with open('f.fasta') as fp:
    for name, seq in read_fasta(fp):
        print(name, seq)
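Hooked up to the original caller stub, which takes the file name from sys.argv, the same generator could be used like this (a sketch; printing only name mirrors the original print seq[0]):

import sys

with open(sys.argv[1]) as fp:
    for name, seq in read_fasta(fp):
        print(name)  # the header line, as seq[0] was in the original version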
