I am adapting a web-scraping program from http://danielfrg.com/blog/2013/04/01/nba-scraping-data/#disqus_thread to scrape baseball data from ESPN into a CSV. However, when I run the second piece of code, which writes the games to a CSV, I get a "'NoneType' object has no attribute 'find_all'" error from the section of code below:

for index, row in teams.iterrows():
    _team, url = row['team'], row['url']
    r = requests.get(BASE_URL.format(row['prefix_1'], year, row['prefix_2']))
    table = BeautifulSoup(r.text).table
    for row in table.find_all("tr")[1:]:  # Remove header
        columns = row.find_all('td')
        try:
            _home = True if columns[1].li.text == 'vs' else False
            _other_team = columns[1].find_all('a')[1].text
            _score = columns[2].a.text.split(' ')[0].split('-')
            _won = True if columns[2].span.text == 'W' else False
            match_id.append(columns[2].a['href'].split('?id=')[1])
            home_team.append(_team if _home else _other_team)
            visit_team.append(_team if not _home else _other_team)
            d = datetime.strptime(columns[0].text, '%a, %b %d')
            dates.append(date(year, d.month, d.day))
I can post the entire program, but this is the part of the code that raises the error.
The full error text is:

Traceback (most recent call last):
File "C:\Python27\Project Files\Game Parser.py", line 23, in
for row in table.find_all("tr")[1:]: # Remove header
AttributeError: 'NoneType' object has no attribute 'find_all'
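
From the traceback I am guessing that BeautifulSoup(r.text).table is coming back as None for at least one of the pages, so there is no table object to call find_all on. Below is just a sketch of the check I was thinking of adding to see which URLs return a page without a <table> at all; it reuses the teams, BASE_URL, and year objects from my script, and assumes bs4 since the code uses find_all:

import requests
from bs4 import BeautifulSoup

# Sketch: loop over the same team rows and report any page with no <table>.
# teams, BASE_URL, and year are the same objects used in the code above.
for index, row in teams.iterrows():
    url = BASE_URL.format(row['prefix_1'], year, row['prefix_2'])
    r = requests.get(url)
    soup = BeautifulSoup(r.text)       # same call as in my code
    if soup.table is None:             # this is what later triggers the AttributeError
        print url, r.status_code       # Python 2.7 print statement, per the traceback path

If that prints anything, I assume the ESPN baseball pages are laid out differently from the NBA pages the original program was written against, but I am not sure how to adjust for that.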
Any help on how to get this code running would be greatly appreciated.