Scraping Amazon data with a Python crawler

For reference:

How does Amazon detect and block crawlers?

If we attach a header that varies from request to request, the success rate increases significantly:

```python
import requests

headers = randHeader()  # generate a fresh randomized header for each request
rHtml = requests.get(url, headers=headers, timeout=20)
...
```
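
A single request can still fail even with a random header, so in practice you may want to retry with a freshly generated header on each attempt. Here is a minimal sketch, assuming the randHeader function defined below; the fetch_with_retry helper, retry count, and backoff are my own illustrative additions, not from the original post:

```python
import time
import requests

def fetch_with_retry(url, retries=3, timeout=20):
    """Fetch a URL, regenerating the random header before every attempt."""
    for attempt in range(retries):
        try:
            resp = requests.get(url, headers=randHeader(), timeout=timeout)
            if resp.status_code == 200:
                return resp
        except requests.RequestException:
            pass  # network error or timeout: fall through and retry
        time.sleep(2 ** attempt)  # simple exponential backoff between attempts
    return None  # all attempts failed or were blocked
```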

The returned HTML can then be parsed with PyQuery to extract the elements you need (see the sketch after the randHeader listing below).

```python
import random

# Generate a randomized request header
def randHeader():
    head_connection = ['Keep-Alive', 'close']
    head_accept = ['text/html, application/xhtml+xml, */*']
    head_accept_language = ['zh-CN,fr-FR;q=0.5', 'en-US,en;q=0.8,zh-Hans-CN;q=0.5,zh-Hans;q=0.3']
    head_user_agent = ['Mozilla/5.0 (Windows NT 6.3; WOW64; Trident/7.0; rv:11.0) like Gecko',
                       'Mozilla/5.0 (Windows NT 5.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/28.0.1500.95 Safari/537.36',
                       'Mozilla/5.0 (Windows NT 6.1; WOW64; Trident/7.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; .NET4.0C; rv:11.0) like Gecko',
                       'Mozilla/5.0 (Windows; U; Windows NT 5.2) Gecko/2008070208 Firefox/3.0.1',
                       'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070309 Firefox/2.0.0.3',
                       'Mozilla/5.0 (Windows; U; Windows NT 5.1) Gecko/20070803 Firefox/1.5.0.12',
                       'Opera/9.27 (Windows NT 5.2; U; zh-cn)',
                       'Mozilla/5.0 (Macintosh; PPC Mac OS X; U; en) Opera 8.0',
                       'Opera/8.0 (Macintosh; PPC Mac OS X; U; en)',
                       'Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.8.1.12) Gecko/20080219 Firefox/2.0.0.12 Navigator/9.0.0.6',
                       'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Win64; x64; Trident/4.0)',
                       'Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; Trident/4.0)',
                       'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E)',
                       'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Maxthon/4.0.6.2000 Chrome/26.0.1410.43 Safari/537.1',
                       'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; SLCC2; .NET CLR 2.0.50727; .NET CLR 3.5.30729; .NET CLR 3.0.30729; Media Center PC 6.0; InfoPath.2; .NET4.0C; .NET4.0E; QQBrowser/7.3.9825.400)',
                       'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0',
                       'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/21.0.1180.92 Safari/537.1 LBBROWSER',
                       'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; WOW64; Trident/6.0; BIDUBrowser 2.x)',
                       'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.11 TaoBrowser/3.0 Safari/536.11']

    header = {
        'Connection': head_connection[0],
        'Accept': head_accept[0],
        'Accept-Language': head_accept_language[1],
        # pick a User-Agent at random on every call
        'User-Agent': head_user_agent[random.randrange(0, len(head_user_agent))]
    }
    return header
```
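
As an example of the PyQuery step, the sketch below pulls a title and price out of a fetched product page. The CSS selectors (#productTitle, .a-price .a-offscreen) are assumptions about Amazon's current markup, which changes frequently, so treat them as placeholders:

```python
from pyquery import PyQuery as pq

doc = pq(rHtml.text)  # rHtml is the requests response fetched earlier

# Both selectors are guesses at Amazon's markup and may need updating.
title = doc('#productTitle').text()
price = doc('.a-price .a-offscreen').eq(0).text()
print(title, price)
```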


Answer: A Python crawler can scrape data from a specified target site and get around problems that simpler approaches cannot solve, such as being blocked by the target site. [1] Below is a sample snippet that fetches the Amazon site:

```python
import urllib.request

req = urllib.request.urlopen('https://www.amazon.com')
print(req.code)
```

This snippet uses Python's urllib library to send an HTTP request and print the response status code from the Amazon site. [2] In addition, if you are interested in Python crawling, the referenced post points to further resources in a QQ technical exchange group, such as 3000+ Python e-books, Python development environment setup tutorials, and Python self-study videos. [3]

References:
1. [【Python爬虫】:使用「Requests」+「bs4」写亚马逊爬虫](https://blog.csdn.net/weixin_33655208/article/details/114446890)
2., 3. [带你一步步破解亚马逊 淘宝 京东的反爬虫机制!](https://blog.csdn.net/weixin_52994140/article/details/117957969)
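
A practical note on the anti-crawling question above: when Amazon suspects automation it often serves a CAPTCHA ("Robot Check") page, sometimes even with a 200 status, so checking the status code alone is not enough. The following is a rough heuristic sketch; the marker strings are assumptions about the block page's content, not a documented API:

```python
import requests

def looks_blocked(resp):
    """Heuristically detect Amazon's robot-check page (marker strings are assumptions)."""
    if resp is None or resp.status_code == 503:
        return True
    text = resp.text.lower()
    return 'robot check' in text or 'captcha' in text

resp = requests.get('https://www.amazon.com', headers=randHeader(), timeout=20)
if not looks_blocked(resp):
    print('page fetched OK')
```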
