Crawler frameworks - data mining
Categories: 1. web crawlers (public) 2. worm crawlers (malware)
Workflow:
- Access the page (browser: client) or simulate it (script: automated access) - fetch the page data - parse the data - filter the data - store locally (txt, Word, Excel, Redis, MongoDB, MySQL…)
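The fetch - parse - filter - store pipeline can be sketched offline with only the standard library; the HTML string and the links.txt filename below are made up for the demo, and a canned string stands in for the downloaded page:

```python
from html.parser import HTMLParser

# fetch: a canned HTML string stands in for a downloaded page
html = '<html><body><a href="/a">one</a><a href="/b">two</a><p>text</p></body></html>'

# parse + filter: keep only the href targets of <a> tags
class LinkParser(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []
    def handle_starttag(self, tag, attrs):
        if tag == 'a':
            self.links.extend(v for k, v in attrs if k == 'href')

parser = LinkParser()
parser.feed(html)

# store: write the filtered data to a local txt file
with open('links.txt', 'w') as f:
    f.write('\n'.join(parser.links))

print(parser.links)  # ['/a', '/b']
```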
Crawler learning path:
- urllib, re
- requests, bs4
- browser automation: Selenium
- distributed crawling: Scrapy
Installing third-party packages:
- cmd: pip install <package name>
- cmd: pip uninstall <package name>
- cmd: pip show <package name>
- cmd: pip list
The requests library
Used to simulate a browser: send requests, submit data, and read what the server returns.
import requests
print('path',requests)
path <module 'requests' from 'D:\\Users\\18769\\anaconda3\\lib\\site-packages\\requests\\__init__.py'>
1. Request functions
HTTP methods:
- get
- post
response_get = requests.get("http://httpbin.org/get")
response_post = requests.post("http://httpbin.org/post")
2. Submitting parameters
Parameters go to the server endpoint (url) differently for the two methods: get sends them in the query string (params=), post in the request body (data=).
response_get = requests.get("http://httpbin.org/get",params={'user':'qjx'})
response_post = requests.post("http://httpbin.org/post",data={'user':'qjx'})
3. Inspecting the request URL
Request object (submitted parameters) - server - response (returned data)
print(response_get.url)
print(response_post.url)
http://httpbin.org/get?user=qjx
http://httpbin.org/post
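How params gets folded into the query string can be seen offline via the PreparedRequest that requests builds internally (a sketch of its internals, no request is sent):

```python
from requests.models import PreparedRequest

# prepare_url does the same params-to-query-string encoding as requests.get
req = PreparedRequest()
req.prepare_url('http://httpbin.org/get', {'user': 'qjx'})
print(req.url)  # http://httpbin.org/get?user=qjx
```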
4. Response status codes and encodings
Status codes returned by the server:
- 200 OK
- 403 Forbidden (access denied)
- 500, 505 server errors
- …
Page encodings:
- utf-8
- gbk
- iso
- …
response_get = requests.get("https://www.baidu.com/")
print('code-',response_get.status_code)
print('encoding-',response_get.encoding)
response_post = requests.post("https://www.baidu.com/")
print('code-',response_post.status_code)
print('encoding-',response_post.encoding)
code- 200
encoding- ISO-8859-1
code- 302
encoding- ISO-8859-1
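The official reason phrases behind these numbers can be looked up from the standard library, no request needed:

```python
from http import HTTPStatus

# map a numeric code to its official reason phrase
for code in (200, 403, 500, 505):
    print(code, HTTPStatus(code).phrase)
# 200 OK
# 403 Forbidden
# 500 Internal Server Error
# 505 HTTP Version Not Supported
```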
5. Response content
- text: the body decoded to a string
- content: the raw bytes (binary)
Returned data types:
- html: parse with bs4
- json: use the json module / response.json()
- xml: parse with bs4
- text
- file: use a download function
response_get.encoding = "utf-8" # set the decoding to UTF-8
print(response_get.text)
<!DOCTYPE html>
<!--STATUS OK--><html> <head><meta http-equiv=content-type content=text/html;charset=utf-8><meta http-equiv=X-UA-Compatible content=IE=Edge><meta content=always name=referrer><link rel=stylesheet type=text/css href=https://ss1.bdstatic.com/5eN1bjq8AAUYm2zgoY3K/r/www/cache/bdorz/baidu.min.css><title>百度一下,你就知道</title></head> <body link=#0000cc> <div id=wrapper> <div id=head> <div class=head_wrapper> <div class=s_form> <div class=s_form_wrapper> <div id=lg> <img hidefocus=true src=//www.baidu.com/img/bd_logo1.png width=270 height=129> </div> <form id=form name=f action=//www.baidu.com/s class=fm> <input type=hidden name=bdorz_come value=1> <input type=hidden name=ie value=utf-8> <input type=hidden name=f value=8> <input type=hidden name=rsv_bp value=1> <input type=hidden name=rsv_idx value=1> <input type=hidden name=tn value=baidu><span class="bg s_ipt_wr"><input id=kw name=wd class=s_ipt value maxlength=255 autocomplete=off autofocus=autofocus></span><span class="bg s_btn_wr"><input type=submit id=su value=百度一下 class="bg s_btn" autofocus></span> </form> </div> </div> <div id=u1> <a href=http://news.baidu.com name=tj_trnews class=mnav>新闻</a> <a href=https://www.hao123.com name=tj_trhao123 class=mnav>hao123</a> <a href=http://map.baidu.com name=tj_trmap class=mnav>地图</a> <a href=http://v.baidu.com name=tj_trvideo class=mnav>视频</a> <a href=http://tieba.baidu.com name=tj_trtieba class=mnav>贴吧</a> <noscript> <a href=http://www.baidu.com/bdorz/login.gif?login&tpl=mn&u=http%3A%2F%2Fwww.baidu.com%2f%3fbdorz_come%3d1 name=tj_login class=lb>登录</a> </noscript> <script>document.write('<a href="http://www.baidu.com/bdorz/login.gif?login&tpl=mn&u='+ encodeURIComponent(window.location.href+ (window.location.search === "" ? "?" : "&")+ "bdorz_come=1")+ '" name="tj_login" class="lb">登录</a>');
</script> <a href=//www.baidu.com/more/ name=tj_briicon class=bri style="display: block;">更多产品</a> </div> </div> </div> <div id=ftCon> <div id=ftConw> <p id=lh> <a href=http://home.baidu.com>关于百度</a> <a href=http://ir.baidu.com>About Baidu</a> </p> <p id=cp>©2017 Baidu <a href=http://www.baidu.com/duty/>使用百度前必读</a> <a href=http://jianyi.baidu.com/ class=cp-feedback>意见反馈</a> 京ICP证030173号 <img src=//www.baidu.com/img/gs.gif> </p> </div> </div> </div> </body> </html>
response_get = requests.get("http://httpbin.org/get",params={'user':'qjx'})
print(type(response_get.text)) # response_get.text returns a str
data = response_get.json() # renamed from `json` to avoid shadowing the stdlib module
print(type(data)) # response_get.json() returns a dict
print(data)
print(data['args'])
print(data['headers'])
print(data['origin'])
print(data['url'])
<class 'str'>
<class 'dict'>
{'args': {'user': 'qjx'}, 'headers': {'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.22.0', 'X-Amzn-Trace-Id': 'Root=1-5f054028-cd2f7f0bdb28b04f165e6743'}, 'origin': '119.189.116.118', 'url': 'http://httpbin.org/get?user=qjx'}
{'user': 'qjx'}
{'Accept': '*/*', 'Accept-Encoding': 'gzip, deflate', 'Host': 'httpbin.org', 'User-Agent': 'python-requests/2.22.0', 'X-Amzn-Trace-Id': 'Root=1-5f054028-cd2f7f0bdb28b04f165e6743'}
119.189.116.118
http://httpbin.org/get?user=qjx
from urllib.request import urlretrieve # download a network resource to a local file
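urlretrieve(url, filename) saves a resource straight to disk; a minimal offline sketch, using a data: URL in place of a real file link (the name hello.txt is arbitrary):

```python
from urllib.request import urlretrieve

# a data: URL carries the content inline, so no network is needed
path, headers = urlretrieve('data:,hello%20crawler', 'hello.txt')
print(open(path).read())  # hello crawler
```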
6. Custom headers
Servers validate the client by checking parameters the browser submits behind the scenes.
Browser developer tools:
the Network tab captures each url the page requests and shows the request details; F5 refreshes
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36
# build the custom headers as a dict
head = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36'}
response_get = requests.get('http://www.qianlima.com/zb/area_305',headers=head)
print('code-',response_get.status_code)
code- 200
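The default identity that gets scripts blocked in the first place can be inspected offline; a Session object carries the header set requests sends when you don't override it:

```python
import requests

# without a custom User-Agent, every request announces itself as python-requests
s = requests.Session()
print(s.headers['User-Agent'])  # e.g. python-requests/2.x.y
```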
7. Cookies
A dict-like cache the browser keeps locally
# read cookies
response_get = requests.get(url='http://httpbin.org/cookies')
print("cookies-",response_get.cookies) # the cookie jar is empty
# submit a cookie
cookie = {'name':'qjx'}
response_get = requests.get(url='http://httpbin.org/cookies',cookies=cookie)
print("cookies-",response_get.text)
cookies- <RequestsCookieJar[]>
cookies- {
"cookies": {
"name": "qjx"
}
}
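To keep cookies across several requests (e.g. staying logged in), a requests.Session holds them in one jar automatically; in this sketch the jar is seeded by hand so it runs offline, whereas in real use Set-Cookie response headers fill it:

```python
import requests

# a Session reuses one cookie jar (and one connection pool) across requests
s = requests.Session()
s.cookies.set('name', 'qjx')  # seeded manually for the demo
print(dict(s.cookies))        # {'name': 'qjx'}
# every subsequent s.get(...) / s.post(...) now sends this cookie automatically
```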
8. Proxy IPs
Your machine connects through another host, which forwards requests on your behalf
pro = {
    # static ip proxies; keys must be the scheme names 'http'/'https'
    # (the keys 'http://'/'https://' never match, so the proxy is silently skipped)
    'http':'http://37.238.209.227:80',
    'https':'http://110.232.252.234:8080'
}
head = {'User-Agent':'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.106 Safari/537.36'}
response_get = requests.get('http://www.qianlima.com/zb/area_305',proxies=pro,headers=head)
print('code-',response_get.status_code)
code- 200
9. Handling connection timeouts
requests has no default timeout, so a request can hang indefinitely; pass timeout= (in seconds) and an exception is raised once it is exceeded
try:
    response_get = requests.get(url='http://httpbin.org/cookies',timeout=0.05)
except Exception as e:
    print(e)
HTTPConnectionPool(host='httpbin.org', port=80): Max retries exceeded with url: /cookies (Caused by ConnectTimeoutError(<urllib3.connection.HTTPConnection object at 0x000002783A6D8308>, 'Connection to httpbin.org timed out. (connect timeout=0.05)'))
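timeout can also be a (connect, read) pair; a sketch using a non-routable address (10.255.255.1, chosen here purely to force a connection failure) so no real server is contacted:

```python
import requests

try:
    # 0.1s to establish the connection, 1s to read the body
    requests.get('http://10.255.255.1/', timeout=(0.1, 1))
except requests.exceptions.RequestException as e:
    # ConnectTimeout, ConnectionError, ReadTimeout all derive from RequestException
    print(type(e).__name__)
```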