Crawling workflow
- Start from the target site's URL.
- Send an HTTP request; the server's answer comes back wrapped in an HTTP response (libraries: requests, urllib).
- The response body may be binary, HTML, or JSON.
- Extract the target data: HTML (CSS selectors, tags, embedded data/JS), binary (save as jpg, mp4), JSON (convert between JSON strings and Python data types) (libraries: BeautifulSoup, lxml, pyquery).
- Save the results (file or database).
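For the JSON case in the list above, the string ↔ Python conversion is just `json.loads` / `json.dumps`; a minimal sketch (the data here is made up for illustration):

```python
import json

# A JSON string as it might arrive in a response body (made-up data)
raw = '{"title": "肖申克的救赎", "score": 9.7}'

data = json.loads(raw)                        # JSON string -> Python dict
print(data['score'])                          # 9.7
text = json.dumps(data, ensure_ascii=False)   # Python dict -> JSON string
```

With requests, `response.json()` performs the `json.loads` step for you.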
```python
import requests
from bs4 import BeautifulSoup

url = 'https://movie.douban.com/top250'  # starting URL
response = requests.get(url)
print(response)
```
Inspect the `User-Agent` in your browser's request headers, then declare the same `User-Agent` in your own crawler.
**Send the request**
```python
headers = {
    'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Mobile Safari/537.36'
}
response = requests.get(url, headers=headers)
print(response)
```
**Extract the target data**
Work out where the target information sits in the HTML, then let BeautifulSoup locate it by tag: find the `ol` that holds the list, then every `li` inside it, then the title `span` in each `li`.

```python
html = response.text
# BeautifulSoup locates elements by HTML tag, parsed here with the
# built-in html.parser
mysoup = BeautifulSoup(html, 'html.parser')
# Find the <ol> that holds the movie list
movie_ol = mysoup.find('ol', class_='grid_view')
# Find all <li> items inside it
movie_list = movie_ol.find_all('li')
# Extract the movie title from each <li>
for movie in movie_list:
    title = movie.find('span', class_='title').get_text()
    print(title)
```

The score is extracted the same way.
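The `find` / `find_all` pattern used above can be tried on a self-contained snippet (the HTML here is made up to mimic the list structure, not Douban's actual markup):

```python
from bs4 import BeautifulSoup

# Made-up snippet mimicking an <ol class="grid_view"> movie list
html = """
<ol class="grid_view">
  <li><span class="title">Movie A</span></li>
  <li><span class="title">Movie B</span></li>
</ol>
"""
soup = BeautifulSoup(html, 'html.parser')
ol = soup.find('ol', class_='grid_view')
titles = [li.find('span', class_='title').get_text()
          for li in ol.find_all('li')]
print(titles)  # ['Movie A', 'Movie B']
```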
From the pattern in each page's address, build the URLs for all ten pages and crawl them one by one:

```python
url = 'https://movie.douban.com/top250?start=%d&filter=' % (i * 25)
```
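As a quick check of the pattern: each page shows 25 movies, so the `start` offsets run 0, 25, …, 225:

```python
# Generate the ten page URLs from the start-offset pattern
urls = ['https://movie.douban.com/top250?start=%d&filter=' % (i * 25)
        for i in range(10)]
print(urls[0])  # https://movie.douban.com/top250?start=0&filter=
print(urls[9])  # https://movie.douban.com/top250?start=225&filter=
```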
**Complete code**
```python
import requests
from bs4 import BeautifulSoup


def getdata(url):
    """Fetch one page; return its HTML, or None on failure."""
    headers = {
        'User-Agent': 'Mozilla/5.0 (Linux; Android 6.0; Nexus 5 Build/MRA58N) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/84.0.4147.89 Mobile Safari/537.36'
    }
    response = requests.get(url, headers=headers)
    if response.status_code == 200:
        print('got the data')
        return response.text
    else:
        print('fail to get the data')
        return None


def parserdata(html):
    """Extract '<title> <score>' strings from one page of HTML."""
    movies = []
    mysoup = BeautifulSoup(html, 'html.parser')
    movie_ol = mysoup.find('ol', class_='grid_view')
    movie_list = movie_ol.find_all('li')
    for movie_li in movie_list:
        title = movie_li.find('span', class_='title').get_text()
        score = movie_li.find('span', class_='rating_num').get_text()
        print(title, score)
        movies.append(title + ' ' + score)
    return movies


def savedata(movie_list):
    """Append the extracted lines to a text file."""
    with open('豆瓣电影top250.txt', 'a', encoding='utf-8') as f:
        for movie in movie_list:
            f.write(movie + '\n')
    print('saved!')


if __name__ == '__main__':
    for i in range(10):
        url = 'https://movie.douban.com/top250?start=%d&filter=' % (i * 25)
        html = getdata(url)
        if html:  # skip the page if the request failed
            movie_list = parserdata(html)
            savedata(movie_list)
```
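The notes also name lxml as an alternative parser; the same extraction can be sketched with XPath (the HTML snippet below is made up to mimic the list structure, not Douban's exact markup):

```python
from lxml import etree

# Made-up snippet mimicking the grid_view list structure
html = """
<ol class="grid_view">
  <li><span class="title">Movie A</span><span class="rating_num">9.7</span></li>
  <li><span class="title">Movie B</span><span class="rating_num">9.6</span></li>
</ol>
"""
tree = etree.HTML(html)
titles = tree.xpath('//ol[@class="grid_view"]/li/span[@class="title"]/text()')
scores = tree.xpath('//ol[@class="grid_view"]/li/span[@class="rating_num"]/text()')
print(list(zip(titles, scores)))  # [('Movie A', '9.7'), ('Movie B', '9.6')]
```

XPath expressions replace the chained `find` / `find_all` calls; which reads better is largely a matter of taste.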