Scraping Maoyan's Top 100 Movies into MySQL with requests and BeautifulSoup

This post shows how to use Python's requests and BeautifulSoup libraries to scrape the Maoyan Top 100 movie chart and store the results in a MySQL database. After analyzing the page structure, it extracts each movie's rank, title, stars, release date, and score, then creates a database table and inserts the records.


This is my third scraping walkthrough: requests for fetching, BeautifulSoup for parsing, and MySQL for storage.

If you are learning web scraping, this is a good exercise. I recommend keeping the Maoyan Top 100 page open while you follow along so you can pick out the tags yourself; the detailed analysis steps are omitted here (see my earlier write-ups for the method), but the short sketch below gives you a quick way to inspect the markup.
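A minimal inspection sketch, assuming the board page is still reachable with a plain GET and a browser-like user-agent, and still uses the same layout; it prints the first movie's <dd> block so you can see the tags the parser relies on:

import requests
from bs4 import BeautifulSoup

# Eyeball the markup before reading the parser below.
headers = {'user-agent': 'Mozilla/5.0'}
html = requests.get('https://maoyan.com/board/4', headers=headers).text
soup = BeautifulSoup(html, 'lxml')
dd = soup.find('dd')           # the first movie entry on the chart
if dd is not None:
    print(dd.prettify())       # shows the index/name/star/releasetime/score tags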

Working through the Maoyan Top 100 is a way to review requests and BeautifulSoup, plus creating SQL tables and choosing column types. Here is the source code:

import requests
from bs4 import BeautifulSoup
from requests.exceptions import RequestException

headers = {
    'user-agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36"}

def get_page_index(url):
    try:
        response = requests.get(url, headers=headers)  # GET request
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        print("Request failed")
        return None

def parse_one_page(html):
    soup = BeautifulSoup(html, 'lxml')
    board_wrapper = soup.find_all('dl', class_='board-wrapper')[0]
    for link in board_wrapper.find_all('dd'):   # one <dd> per movie
        title = link.find_all('p', class_='name')[0].text
        board_index = link.find_all('i')[0].text                                     # the ranking number
        star = link.find_all('p', class_='star')[0].text.strip()[3:]                 # drop the "主演:" prefix
        releasetime = link.find_all('p', class_='releasetime')[0].text.strip()[5:]   # drop the "上映时间:" prefix
        score = link.find_all('p', class_='score')[0].text
        print(board_index, title, star, releasetime, score)

def main():
    offsets = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
    urls = ['https://maoyan.com/board/4?offset={}'.format(i) for i in offsets]
    for url in urls:
        html = get_page_index(url)
        if html:
            parse_one_page(html)

if __name__ == "__main__":
    main()

The scraped output, one line per movie: rank, title, stars, release date, and score. (Output screenshots omitted.)

Storing the data in MySQL

Storing into the database, part 1: create the table

create table maoyantop100
(board_index int not null,
 title varchar(20) not null,
 star varchar(50) not null,
 releasetime varchar(50) not null,
 score float not null);

-- run after the table exists, so Chinese titles and names store correctly
ALTER TABLE maoyantop100 CONVERT TO CHARACTER SET utf8 COLLATE utf8_unicode_ci;
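To confirm the table looks right before inserting anything, a quick schema check from Python (a sketch, assuming the same local MySQL instance and the 58yuesao database used in the script below; the password is a placeholder):

import pymysql

conn = pymysql.connect(host='localhost', user='root', password='***',
                       db='58yuesao', port=3306, charset='utf8')
with conn.cursor() as cur:
    cur.execute('DESCRIBE maoyantop100')  # one row per column: name, type, nullability, ...
    for column in cur.fetchall():
        print(column)
conn.close()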

Storing into the database, part 2: scrape and insert

import pymysql
import requests
from bs4 import BeautifulSoup
from requests.exceptions import RequestException

headers = {
    'user-agent': "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36"}

conn = pymysql.connect(host="localhost", user="root", password='180****3940',
                       db='58yuesao', port=3306, charset="utf8")

def get_page_index(url):
    try:
        response = requests.get(url, headers=headers)  # GET request
        if response.status_code == 200:
            return response.text
        return None
    except RequestException:
        print("Request failed")
        return None

def parse_one_page(html):
    soup = BeautifulSoup(html, 'lxml')
    board_wrapper = soup.find_all('dl', class_='board-wrapper')[0]
    for link in board_wrapper.find_all('dd'):   # one <dd> per movie
        title = link.find_all('p', class_='name')[0].text
        board_index = link.find_all('i')[0].text                                     # the ranking number
        star = link.find_all('p', class_='star')[0].text.strip()[3:]                 # drop the "主演:" prefix
        releasetime = link.find_all('p', class_='releasetime')[0].text.strip()[5:]   # drop the "上映时间:" prefix
        score = link.find_all('p', class_='score')[0].text
        # yield, not return: a return here would exit after the first movie on the page
        yield board_index, title, star, releasetime, score

def save_to_mysql(board_index, title, star, releasetime, score):
    cur = conn.cursor()  # the cursor is what actually sends SQL statements to MySQL
    insert_data = ("INSERT INTO maoyantop100(board_index, title, star, releasetime, score) "
                   "VALUES (%s, %s, %s, %s, %s)")
    val = (board_index, title, star, releasetime, score)
    cur.execute(insert_data, val)
    conn.commit()

def main():
    offsets = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90]
    urls = ['https://maoyan.com/board/4?offset={}'.format(i) for i in offsets]
    for url in urls:
        html = get_page_index(url)
        if html:
            for record in parse_one_page(html):
                save_to_mysql(*record)

if __name__ == "__main__":
    main()

(Screenshot: the records stored in the database.)
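One refinement worth trying once the basic version works: pymysql's cursor.executemany can insert a whole page of records in a single batch instead of committing row by row. A sketch, where save_page_to_mysql is a hypothetical helper reusing the conn from the script above:

def save_page_to_mysql(records):
    # records: an iterable of (board_index, title, star, releasetime, score) tuples
    sql = ("INSERT INTO maoyantop100(board_index, title, star, releasetime, score) "
           "VALUES (%s, %s, %s, %s, %s)")
    with conn.cursor() as cur:
        cur.executemany(sql, list(records))  # one batched INSERT per page
    conn.commit()

# In main(), the inner loop then collapses to:
#     save_page_to_mysql(parse_one_page(html))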

The first version of this script had a small bug that was maddening at the time: printing the scraped content gave the full list, but only the first movie from each page ended up in the database. The cause was a return inside the for loop of parse_one_page, which exits the function as soon as the first <dd> is processed; the version above uses yield instead, so every record on the page is handed to save_to_mysql.
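If the difference is not obvious, this tiny standalone example shows it:

def first_only(items):
    for item in items:
        return item       # exits the function on the very first iteration

def every_item(items):
    for item in items:
        yield item        # hands back each element, then resumes the loop

print(first_only([1, 2, 3]))          # 1
print(list(every_item([1, 2, 3])))    # [1, 2, 3]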

Author: 夜希辰
