Python pagination scraping: problem crawling paginated pages

My approach is to first build the list of all page URLs, then request each page, parse the contents with BeautifulSoup, and finally write the results to a CSV file. But it throws an error. Why is that? Is something wrong with my approach? Any help would be appreciated. My code is below:

# -*- coding:utf-8 -*-
import requests
from bs4 import BeautifulSoup
import csv

user_agent = 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/52.0.2743.116 Safari/537.36'
url = 'http://finance.qq.com'

def get_url(url):
    links = []
    page_number = 1
    while page_number <= 36:
        link = url + '/c/gdyw_' + str(page_number) + '.htm'
        links.append(link)
        page_number = page_number + 1
    return links

all_link = get_url(url)

def get_data(all_link):
    response = requests.get(all_link)
    soup = BeautifulSoup(response.text, 'lxml')
    soup = soup.find('div', {'id': 'listZone'}).findAll('a')
    return soup

def main():
    with open("test.csv", "w") as f:
        f.write("url\t titile\n")
        for item in get_data(all_link):
            f.write("{}\t{}\n".format(url + item.get("href"), item.get_text()))

if __name__ == "__main__":
    main()

Error output:

Traceback (most recent call last):
  File "D:/Python34/write_csv.py", line 33, in <module>
    main()
  File "D:/Python34/write_csv.py", line 29, in main
    for item in get_data(all_link):
  File "D:/Python34/write_csv.py", line 21, in get_data
    response = requests.get(all_link)
  File "D:\Python34\lib\site-packages\requests\api.py", line 71, in get
    return request('get', url, params=params, **kwargs)
  File "D:\Python34\lib\site-packages\requests\api.py", line 57, in request
    return session.request(method=method, url=url, **kwargs)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 475, in request
    resp = self.send(prep, **send_kwargs)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 579, in send
    adapter = self.get_adapter(url=request.url)
  File "D:\Python34\lib\site-packages\requests\sessions.py", line 653, in get_adapter
    raise InvalidSchema("No connection adapters were found for '%s'" % url)

You can't just call requests.get on a list.

http://docs.python-requests.o…

"url – URL for the new Request object."

You should loop over the list with a for loop and request the pages one at a time.
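For what it's worth, the failure is easy to reproduce in isolation (a minimal sketch, not part of the original code): when requests.get receives a list, the URL it ends up preparing is not a plain string starting with http://, so get_adapter finds no connection adapter and raises InvalidSchema, which matches the traceback above.

import requests

# Minimal reproduction: passing a list instead of a single URL string
# triggers the same InvalidSchema error as in the traceback above.
links = ['http://finance.qq.com/c/gdyw_1.htm', 'http://finance.qq.com/c/gdyw_2.htm']
try:
    requests.get(links)
except requests.exceptions.InvalidSchema as exc:
    print(exc)   # No connection adapters were found for "['http://finance.qq.com/...']"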

Update:

I've reworked your program a bit: at least it runs on Python 3 now. I tried Python 2, but couldn't be bothered to fix the unicode issues.

def get_data(all_link):
    for uri in all_link:
        response = requests.get(uri)
        soup = BeautifulSoup(response.text, 'lxml')
        soup = soup.find('div', {'id': 'listZone'}).findAll('a')
        for small_soup in soup:
            yield small_soup

Rewrite this part of your code.
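Putting it together, here is a minimal sketch of what the whole corrected script could look like. It assumes the same listing URLs and the #listZone selector from the question, uses the csv module that the original already imports for the actual writing, and adds a guard in case a page has no listZone div:

# -*- coding: utf-8 -*-
# Sketch only: same URLs and selectors as the question, but each listing
# page is requested one at a time instead of passing the whole list at once.
import csv
import requests
from bs4 import BeautifulSoup

url = 'http://finance.qq.com'

def get_url(base):
    # Build the 36 listing-page URLs (gdyw_1.htm ... gdyw_36.htm).
    return [base + '/c/gdyw_' + str(n) + '.htm' for n in range(1, 37)]

def get_data(all_link):
    # Request each page separately and yield every <a> tag inside #listZone.
    for uri in all_link:
        response = requests.get(uri)
        soup = BeautifulSoup(response.text, 'lxml')
        zone = soup.find('div', {'id': 'listZone'})
        if zone is None:          # layout changed or the request failed
            continue
        for a in zone.findAll('a'):
            yield a

def main():
    with open('test.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f, delimiter='\t')
        writer.writerow(['url', 'title'])
        for item in get_data(get_url(url)):
            writer.writerow([url + item.get('href'), item.get_text()])

if __name__ == '__main__':
    main()

The yield-based get_data also keeps memory flat: links are written to the CSV as each page is parsed, instead of all pages being fetched and collected first.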

Does the error happen right away, or only after some of the links have already been processed? Print the index and the current URL after each link you handle.
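For example, something like this (just a debugging sketch that rebuilds the same 36 links as the question) would show exactly which link the script dies on:

import requests

# Debugging sketch: print the index and URL before each request so the
# first failing link is obvious in the console output.
all_link = ['http://finance.qq.com/c/gdyw_' + str(n) + '.htm' for n in range(1, 37)]

for i, uri in enumerate(all_link, 1):
    print(i, uri)                      # which link are we about to fetch?
    response = requests.get(uri)
    print(i, response.status_code)     # did the page actually load?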

It's probably that the URL you're requesting is missing the http:// prefix; print the URL and take a look.
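An easy way to check that (a sketch, assuming the same generated link list as the question): print each URL's repr and flag anything that lacks a scheme before you ever call requests.get.

# Sketch: verify every generated URL really starts with a scheme.
all_link = ['http://finance.qq.com/c/gdyw_' + str(n) + '.htm' for n in range(1, 37)]

for uri in all_link:
    print(repr(uri))
    if not uri.startswith(('http://', 'https://')):
        print('missing scheme:', uri)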
