1、Key concepts
1.1 GET and POST
- GET: the query parameters are visible directly in the URL
- POST: the query parameters and submitted data are carried in the form body, so they do not appear in the URL
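The difference can be sketched with the standard library's urllib.parse (example.com is a placeholder host; no request is actually sent):

```python
from urllib import parse

params = {'wd': 'python', 'page': '1'}

# GET: the query parameters are appended to the URL, fully visible
get_url = 'https://example.com/search?' + parse.urlencode(params)
print(get_url)

# POST: the same key/value pairs would travel in the request body
# (e.g. as an encoded form), leaving the URL itself bare
post_url = 'https://example.com/search'
body = parse.urlencode(params).encode('utf-8')
print(post_url, body)
```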
1.2 URL
- URL: Uniform Resource Locator
- Components of a URL
- https: the network protocol (scheme)
- hostname
- path to the requested resource
- anchor
- non-ASCII characters are percent-encoded as % followed by hex digits
Example: https://new.qq.com/omn/TWF20200/TWF2020032502924000.html
https: the protocol
hostname: new.qq.com
resource path: omn/TWF20200/TWF2020032502924000.html
anchor: used by the front end for in-page positioning
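These components can be verified with urlparse from the standard library (the #intro fragment is appended here only to illustrate the anchor; the original example URL has none):

```python
from urllib.parse import urlparse

parts = urlparse('https://new.qq.com/omn/TWF20200/TWF2020032502924000.html#intro')
print(parts.scheme)    # protocol: https
print(parts.netloc)    # hostname: new.qq.com
print(parts.path)      # resource path
print(parts.fragment)  # anchor: intro
```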
1.3 User-Agent
- User-Agent (user agent) identifies the user's browser and operating system, so the server can return a page suited to that client; servers also inspect it for anti-scraping checks
User-Agent: Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/78.0.3904.108 Safari/537.36
1.4 Referer
- Referer indicates which URL the current request came from; also used for anti-scraping checks
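A minimal sketch of attaching both headers with urllib (the URLs are placeholders; no request is actually sent):

```python
import urllib.request

# Placeholder URLs; a real crawler would use the target site here
req = urllib.request.Request(
    'https://example.com/page',
    headers={
        'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64)',
        'Referer': 'https://example.com/',
    },
)
# Note: urllib normalizes header names with str.capitalize()
print(req.get_header('User-agent'))
print(req.get_header('Referer'))
```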
1.5 Status codes
- 200: request succeeded
- 301: permanent redirect
- 302: temporary redirect
- 403: the server refused the request
- 404: not found (the server could not locate the requested resource/page)
- 500: internal server error
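The standard reason phrases for these codes ship with the standard library, which makes a handy cross-check:

```python
from http.client import responses

# Map each status code above to its standard reason phrase
for code in (200, 301, 302, 403, 404, 500):
    print(code, responses[code])
```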
1.6 Browser DevTools (request inspection)
- Elements: the rendered page source, used to locate and analyze data (some of it is post-processed by JavaScript, so it may differ from the raw response)
- Console: the JavaScript console (printed messages)
- Sources: every file loaded by the site
- Network: request capture; shows all the requests the page makes
2、Request modules for crawlers
2.1 The urllib.request module
Approach 1
import requests  # third-party; install with pip install requests
url = 'https://ss3.bdstatic.com/70cFv8Sh_Q1YnxGkpoWK1HF6hhy/it/u=2534506313,1688529724&fm=26&gp=0.jpg'
req = requests.get(url)
fn = open('code.png', 'wb')
fn.write(req.content)
fn.close()
# equivalent, with a context manager that closes the file automatically:
with open('code2.png', 'wb') as f:
    f.write(req.content)
Approach 2
from urllib import request  # built-in urllib module; no third-party install needed
url = 'https://ss3.bdstatic.com/70cFv8Sh_Q1YnxGkpoWK1HF6hhy/it/u=2534506313,1688529724&fm=26&gp=0.jpg'
request.urlretrieve(url,'code2.png')
- Versions
- Python 2: urllib2 and urllib
- Python 3: urllib and urllib2 were merged into urllib.request
- Common methods
- urllib.request.urlopen("URL"): sends a request to the site and returns the response
- bytes = response.read()
- string = response.read().decode("utf-8")
- urllib.request.Request("URL", headers=dict): needed because urlopen() alone cannot set a custom User-Agent
2.2 Using urllib
- read(): reads the body out of the response object
- decode(): bytes -> str
- encode(): str -> bytes
- getcode(): returns the HTTP status code
- geturl(): returns the URL actually fetched (useful when redirects occur)
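The decode()/encode() pair is a plain str/bytes round-trip:

```python
# str -> bytes with encode(), bytes -> str with decode()
raw = '海贼王'.encode('utf-8')
text = raw.decode('utf-8')
print(raw)   # b'\xe6\xb5\xb7\xe8\xb4\xbc\xe7\x8e\x8b'
print(text)  # 海贼王
```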
import urllib.request

url = 'https://www.baidu.com/'
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36'
}
# 1. Create the request object with Request()
req = urllib.request.Request(url, headers=headers)
# 2. Get the response object with urlopen()
res = urllib.request.urlopen(req)
# 3. Read the response body with read()
html = res.read().decode('utf-8')
# print(html)          # page content
# print(res.getcode()) # status code
print(res.geturl())  # the URL actually requested
2.3 The urllib.parse module
- urlencode(dict)
- quote(string) (this one takes a plain string as its argument)
import urllib.parse
te = {'wd': '海贼王'}
result = urllib.parse.urlencode(te)
print(result)  # percent-encodes '海贼王': wd=%E6%B5%B7%E8%B4%BC%E7%8E%8B
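The two functions differ only in their input: urlencode() takes a dict and returns key=value pairs, while quote() percent-encodes a bare string:

```python
import urllib.parse

print(urllib.parse.urlencode({'wd': '海贼王'}))  # wd=%E6%B5%B7%E8%B4%BC%E7%8E%8B
print(urllib.parse.quote('海贼王'))              # %E6%B5%B7%E8%B4%BC%E7%8E%8B
```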
Example: search for a keyword and save the result page as a local HTML file
import urllib.parse
import urllib.request
baseurl = 'https://www.baidu.com/s?'
key = input('Enter a search keyword: ')
# encode the query dict with urlencode()
w = {'wd': key}
k = urllib.parse.urlencode(w)
# build the full URL
url = baseurl + k
# print(url)
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36','Cookie':'BIDUPSID=23F0C104655E78ACD11DB1E20FA56630; PSTM=1592045183; BD_UPN=12314753; sug=0; sugstore=0; ORIGIN=0; bdime=0; BAIDUID=23F0C104655E78AC9F0FB18960BCA3D3:SL=0:NR=10:FG=1; BDUSS=ldxR1FyQ2FEaVZ5UWFjTDlRbThVZHJUQTY1S09PSU81SXlHaUpubVpEY0FMakZmRVFBQUFBJCQAAAAAAAAAAAEAAADzvSajSjdnaGgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAChCV8AoQlfb; BDUSS_BFESS=ldxR1FyQ2FEaVZ5UWFjTDlRbThVZHJUQTY1S09PSU81SXlHaUpubVpEY0FMakZmRVFBQUFBJCQAAAAAAAAAAAEAAADzvSajSjdnaGgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAChCV8AoQlfb; MCITY=-158%3A; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; BD_HOME=1; delPer=0; BD_CK_SAM=1; PSINO=6; BDRCVFR[feWj1Vr5u3D]=I67x6TjHwwYf0; BDRCVFR[tox4WRQ4-Km]=mk3SLVN4HKm; BDRCVFR[-pGxjrCMryR]=mk3SLVN4HKm; BDRCVFR[CLK3Lyfkr9D]=mk3SLVN4HKm; COOKIE_SESSION=204_0_5_9_4_6_0_0_5_4_0_0_533_0_0_0_1602246393_0_1602250500%7C9%2369429_193_1601361993%7C9; H_PS_PSSID=32757_32617_1428_7566_7544_31660_32723_32230_7517_32116_32718; H_PS_645EC=ab4cD3QpA7yZJBKDrrzZqesHzhDrwV%2BYww0WVHtmGJ3Adcj0qvjZIVV%2F9q4'
}
# Create the request object
req = urllib.request.Request(url, headers=headers)
# Get the response object
res = urllib.request.urlopen(req)
# Read and decode the response body
html = res.read().decode('utf-8')
# Write it to a local file
with open('search2.html', 'w', encoding='utf-8') as f:
    f.write(html)
The same search using quote() on the bare keyword instead of urlencode():
import urllib.parse
import urllib.request
baseurl = 'https://www.baidu.com/s?wd='
key = input('Enter a search keyword: ')
k = urllib.parse.quote(key)
url = baseurl + k
headers = {
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.97 Safari/537.36','Cookie':'BIDUPSID=23F0C104655E78ACD11DB1E20FA56630; PSTM=1592045183; BD_UPN=12314753; sug=0; sugstore=0; ORIGIN=0; bdime=0; BAIDUID=23F0C104655E78AC9F0FB18960BCA3D3:SL=0:NR=10:FG=1; BDUSS=ldxR1FyQ2FEaVZ5UWFjTDlRbThVZHJUQTY1S09PSU81SXlHaUpubVpEY0FMakZmRVFBQUFBJCQAAAAAAAAAAAEAAADzvSajSjdnaGgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAChCV8AoQlfb; BDUSS_BFESS=ldxR1FyQ2FEaVZ5UWFjTDlRbThVZHJUQTY1S09PSU81SXlHaUpubVpEY0FMakZmRVFBQUFBJCQAAAAAAAAAAAEAAADzvSajSjdnaGgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAChCV8AoQlfb; MCITY=-158%3A; BDORZ=B490B5EBF6F3CD402E515D22BCDA1598; BD_HOME=1; delPer=0; BD_CK_SAM=1; PSINO=6; BDRCVFR[feWj1Vr5u3D]=I67x6TjHwwYf0; BDRCVFR[tox4WRQ4-Km]=mk3SLVN4HKm; BDRCVFR[-pGxjrCMryR]=mk3SLVN4HKm; BDRCVFR[CLK3Lyfkr9D]=mk3SLVN4HKm; COOKIE_SESSION=204_0_5_9_4_6_0_0_5_4_0_0_533_0_0_0_1602246393_0_1602250500%7C9%2369429_193_1601361993%7C9; H_PS_PSSID=32757_32617_1428_7566_7544_31660_32723_32230_7517_32116_32718; H_PS_645EC=ab4cD3QpA7yZJBKDrrzZqesHzhDrwV%2BYww0WVHtmGJ3Adcj0qvjZIVV%2F9q4'
}
# Create the request object
req = urllib.request.Request(url, headers=headers)
# Get the response object
res = urllib.request.urlopen(req)
# Read and decode the response body
html = res.read().decode('utf-8')
# Write it to a local file
with open('search3.html', 'w', encoding='utf-8') as f:
    f.write(html)