Scraping data with Python and storing it in a MySQL database

1. Install MySQL:

Download the installer from the official site: https://dev.mysql.com/downloads/mysql/

2. Install the driver:

   Under the Anaconda Python environment, install the PyMySQL driver with pip3 install pymysql or conda install pymysql.
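Once the driver is installed, a quick way to confirm it is importable from the current environment (this check is an addition to the walkthrough, not part of the original):

```python
import importlib.util

# Look up the pymysql module without importing it; find_spec() returns
# None when the package is not installed in the current environment.
spec = importlib.util.find_spec("pymysql")
print("pymysql installed:", spec is not None)
```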

3. Connect to the database:

# Establish the MySQL connection
import pymysql
conn = pymysql.connect(host='localhost', user='root', password='admin',
                       db='spider', charset='utf8')
# Get a cursor (by default rows come back as tuples; pass
# cursor=pymysql.cursors.DictCursor to connect() if you want dict rows)
cursor = conn.cursor()
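The original comment mentions getting rows back as dicts, but the default cursor returns tuples. A sketch of a connection helper that opts into dict rows; the host, user, password, and database name are the same assumptions as above, so adjust them for your setup:

```python
def make_connection(db='spider'):
    """Open a PyMySQL connection whose cursors yield rows as dicts.

    The import is deferred so this sketch can be defined even where
    PyMySQL is not installed; the connection parameters are the
    assumptions from the walkthrough above.
    """
    import pymysql
    import pymysql.cursors
    return pymysql.connect(host='localhost', user='root', password='admin',
                           db=db, charset='utf8',
                           cursorclass=pymysql.cursors.DictCursor)
```

With DictCursor, cursor.fetchone() returns a dict keyed by column name instead of a plain tuple, which makes the scraping code less sensitive to column order.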

4. Fetch and insert the data:

# MySQL insert statement (insert title and body into the cnblogs table)
sql = 'insert into cnblogs values (%s,%s)'
parm = (title, body)
# execute(sql, args): args is usually a list or tuple; if there is only one
# parameter it can be passed directly. Placeholders in the SQL are always %s,
# and the driver escapes the values for you.
cursor.execute(sql, parm)
# Commit the transaction
conn.commit()
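The snippet above commits after every row but never handles a failed INSERT. One way to sketch a safer helper (the function name and the rollback-on-error behavior are my additions, not from the original):

```python
def insert_post(conn, title, body, table='cnblogs'):
    """Insert one (title, body) row; commit on success, roll back on error.

    Works with any DB-API connection such as the pymysql one above.
    The %s placeholders let the driver escape the values, which also
    guards against SQL injection from scraped text.
    """
    sql = 'insert into {} values (%s, %s)'.format(table)  # a table name cannot be a %s placeholder
    cursor = conn.cursor()
    try:
        cursor.execute(sql, (title, body))
        conn.commit()
    except Exception:
        conn.rollback()  # undo the partial transaction, then re-raise
        raise
```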

5. Complete examples:

5.1 Scraping cnblogs.com blog posts into MySQL

from lxml import etree
import requests
# Establish the MySQL connection
import pymysql
conn = pymysql.connect(host='localhost', user='root', password='admin',
                       db='spider', charset='utf8')
# Get a cursor (rows come back as tuples by default)
cursor = conn.cursor()
# Entry URL: the recommended-bloggers list
recommed_url = 'https://www.cnblogs.com/aggsite/UserStats'
res = requests.get(url=recommed_url).content.decode('utf-8')

ele = etree.HTML(res)
elements = ele.xpath("//*[@id='blogger_list']//a/@href")
# Build absolute blog URLs, dropping the last two non-blog links
url_list = ['http:' + ele for ele in elements][:-2]
for url in url_list:
    while True:  # follow the "next page" links of one blog
        print(url)
        res2 = requests.get(url).content.decode('utf-8')
        ele2 = etree.HTML(res2)
        # Links to the individual posts on this page
        word_urls = ele2.xpath('//*[@id="mainContent"]/div/div/div[2]/a/@href')
        for wordUrl in word_urls:
            res3 = requests.get(wordUrl).content.decode('utf-8')
            ele3 = etree.HTML(res3)
            title = ele3.xpath('//*[@id="cb_post_title_url"]/text()')[0]
            body = etree.tostring(ele3.xpath('//*[@id="cnblogs_post_body"]')[0], encoding='utf-8').decode('utf-8')
            body = body[:10]  # keep only the first 10 characters while testing; drop this line to store the full body
            # MySQL insert statement (insert title and body into the cnblogs table)
            sql = 'insert into cnblogs values (%s,%s)'
            parm = (title, body)
            # execute(sql, args): args is usually a list or tuple; SQL placeholders are %s
            cursor.execute(sql, parm)
            # Commit the transaction
            conn.commit()

        # select @href on both alternatives so we get the URL, not the <a> element
        next_page = ele2.xpath("//*[@id='pager']/a/@href|//*[@id='nav_next_page']/a/@href")
        if next_page:
            url = next_page[0]
        else:
            break
    break  # demo: stop after the first blogger
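Example 5.1 assumes the spider database already contains a two-column cnblogs table. The original never shows the schema, so the column names below are assumptions; a helper that creates a matching table might look like:

```python
def ensure_table(conn, table='cnblogs'):
    """Create the two-column target table if it does not exist yet.

    The column names (title, body) are assumptions; the walkthrough
    only shows a two-value INSERT, not the real schema.
    """
    ddl = ('create table if not exists {} ('
           ' title varchar(255) not null,'
           ' body  text'
           ')').format(table)
    cursor = conn.cursor()
    cursor.execute(ddl)
    conn.commit()
```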

5.2 Scraping the runoob.com "100 Python examples" into MySQL

from lxml import etree
import requests  # HTTP request library

# Establish the MySQL connection
import pymysql
conn = pymysql.connect(host='localhost', user='root', password='admin',
                       db='spider', charset='utf8')
# Get a cursor (rows come back as tuples by default)
cursor = conn.cursor()
# Index page of the "100 Python examples" on runoob.com
recommed_url = 'https://www.runoob.com/python3/python3-examples.html'
# Request the URL with requests.get() and decode the body as UTF-8
res = requests.get(url=recommed_url).content.decode('utf-8')
# Parse the returned HTML page with lxml.etree
ele = etree.HTML(res)
# Extract the relative URLs of the detail pages with XPath
elements = ele.xpath('//*[@id="content"]/ul/li/a/@href')
# Build the list of absolute detail-page URLs
url_list = ['https://www.runoob.com/python3/' + ele for ele in elements]
# print(url_list)  # uncomment to inspect the parsed detail-page URLs
for url in url_list:
    print(url)
    res2 = requests.get(url).content.decode('utf-8')
    ele2 = etree.HTML(res2)
    title = ele2.xpath('//*[@id="content"]/h1/text()')[0]
    body = ele2.xpath('//*[@id="content"]/p[2]/text()')[0]

    # MySQL insert statement (insert title and body into the cainiao table)
    sql = 'insert into cainiao values (%s,%s)'
    parm = (title, body)
    # execute(sql, args): args is usually a list or tuple; SQL placeholders are %s
    cursor.execute(sql, parm)
    # Commit the transaction
    conn.commit()
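Neither example closes its cursor or connection when the loop finishes. A sketch of a cleanup pattern using contextlib.closing; the helper name and the use of executemany() are my additions:

```python
from contextlib import closing

def store_rows(conn, rows, table='cainiao'):
    """Insert many (title, body) rows, then always close cursor and connection.

    contextlib.closing() calls .close() on exit even if an insert fails,
    so neither the cursor nor the connection is leaked.
    """
    with closing(conn) as c, closing(c.cursor()) as cursor:
        # One executemany() round for all rows instead of a commit per row
        cursor.executemany('insert into {} values (%s, %s)'.format(table), rows)
        c.commit()
```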


 
