My study notes for "2020 Python Web Scraping Full Course (project-ready after completion)"
Link: https://www.bilibili.com/video/BV1Yh411o7Sz
Learning the requests module
A simple web collector
```python
import requests

if __name__ == "__main__":
    # UA spoofing: pretend to be a normal browser
    headers = {
        'User-Agent': ...
    }
    url = 'https://www.sogou.com/web'
    kw = input('Enter a word: ')
    # the query word goes into the params dict; requests encodes it into the URL
    param = {
        'query': kw
    }
    response = requests.get(url=url, params=param, headers=headers)
    page_text = response.text
    fileName = kw + '.html'
    with open(fileName, 'w', encoding='utf-8') as fp:
        fp.write(page_text)
    print(fileName, 'saved successfully')
```
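To see what requests does with the `params` argument, the same query-string encoding can be reproduced with the standard library. This is a minimal sketch of the idea, not how requests is implemented internally; the word 'dog' is just an example value:

```python
from urllib.parse import urlencode

# requests appends the params dict to the URL as an encoded query string;
# urlencode performs the equivalent encoding by hand
url = 'https://www.sogou.com/web'
param = {'query': 'dog'}
full_url = url + '?' + urlencode(param)
print(full_url)  # https://www.sogou.com/web?query=dog
```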
Key points:
- Handling URL query parameters: the target URL has the form `url = 'https://www.sogou.com/web?query=...'`; the word to query is read into `kw` and stored in the `param` dict under the key `query`.
- `requests.get(url, params, **kwargs)` sends a request to the given URL; the URL carries parameters, and requests handles their encoding during the request.
    - `url`: the site's URL
    - `params`: the query parameters sent with the request
    - `response` holds the returned response
- `page_text = response.text` gets the response body as text and names it `page_text`.
- `fileName = kw + '.html'` builds the output filename.
- `with open(fileName, 'w', encoding='utf-8') as fp: fp.write(page_text)` opens `fileName` in write mode with utf-8 encoding and writes `page_text` into it.
- Browser spoofing: set the `'User-Agent'` request header.
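The same User-Agent spoofing works with the standard library too. A minimal sketch, assuming a placeholder UA string (a real one would be copied from the browser's dev tools):

```python
from urllib.request import Request

# attach a browser-like User-Agent header so the server does not
# reject the request as coming from a script
req = Request('https://www.sogou.com/web?query=dog',
              headers={'User-Agent': 'Mozilla/5.0'})
print(req.get_header('User-agent'))  # Mozilla/5.0
```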
Cracking Baidu Translate (now defunct)
1. POST request (carries parameters)
2. The response data is a set of JSON data
Steps:
- Specify the URL: find the data packet under XHR in the browser's dev tools
- Apply UA spoofing
- Handle the POST request parameters (same as for GET)
- Send the request
- Get the response data: `json()` directly returns an object (confirm the response is JSON by checking `Content-Type`)
```python
import json

import requests

if __name__ == "__main__":
    headers = {
        'User-Agent': ''  # fill in a real browser UA string
    }
    post_url = 'https://fanyi.baidu.com/sug'
    word = input('Enter a word: ')
    data = {
        'kw': word  # the parameter key must be the string 'kw'
    }
    response = requests.post(url=post_url, data=data, headers=headers)
    dic_obj = response.json()
    fileName = word + '.json'
    with open(fileName, 'w', encoding='utf-8') as fp:
        json.dump(dic_obj, fp=fp, ensure_ascii=False)
    print('over')
```
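As the steps above note, `json()` should only be called after confirming the response really is JSON. A minimal sketch of that check; `parse_json_response` is a hypothetical helper, not part of requests:

```python
import json

def parse_json_response(content_type, body):
    # hypothetical helper: only decode the body when the server
    # declares JSON in the Content-Type header
    if 'application/json' in content_type:
        return json.loads(body)
    raise ValueError('not a JSON response: ' + content_type)

print(parse_json_response('application/json; charset=utf-8', '{"kw": "dog"}'))
```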
Key points:
- json.dumps converts a Python data structure to a JSON string:
import json
data = {
'name' : 'myname',
'age' : 100,
}
json_str = json.dumps(data)
- json.loads converts a JSON-encoded string back to a Python data structure:
data = json.loads(json_str)
- json.dump / json.load write and read JSON through a file object:
```python
with open('test.json', 'w') as f:
    json.dump(data, f)
with open('test.json', 'r') as f:
    data = json.load(f)
```
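The `ensure_ascii=False` argument used in the code above keeps non-ASCII characters (e.g. Chinese) readable in the output file instead of escaping them; a quick illustration:

```python
import json

data = {'kw': '狗'}
print(json.dumps(data))                      # {"kw": "\u72d7"} (escaped)
print(json.dumps(data, ensure_ascii=False))  # {"kw": "狗"} (kept readable)
```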
Douban Movies (now defunct)
```python
import json

import requests

if __name__ == "__main__":
    headers = {
        'User-Agent': 'Mozilla/5.0 (Linux; Android 5.0; SM-G900P Build/LRX21T) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/87.0.4280.88 Mobile Safari/537.36'
    }
    url = 'https://movie.douban.com/typerank'
    param = {
        'type': '24',
        'interval_id': '100:90',
        'action': '',
        'start': '60',   # offset of the first movie returned
        'limit': '20',   # number of movies per response
    }
    response = requests.get(url=url, params=param, headers=headers)
    list_data = response.json()
    with open('./douban.json', 'w', encoding='utf-8') as fp:
        json.dump(list_data, fp=fp, ensure_ascii=False)
    print('over')
```
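The `start`/`limit` pair above implements offset-based paging. A small hypothetical helper (`page_params` is not part of the course code) shows how the offsets line up when fetching successive pages:

```python
def page_params(page, limit=20):
    # page is 0-indexed; start is the offset of the first item on that page
    return {'start': str(page * limit), 'limit': str(limit)}

print(page_params(3))  # {'start': '60', 'limit': '20'} (the page used above)
```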