task05 Getting Started with Web Crawling and Practical Applications


1. Libraries needed for crawling

Requests is currently a very popular HTTP request library written in Python. It makes fetching web pages very convenient and is the third-party library most commonly used for issuing requests in a crawler.

Installation:

pip install requests
or with conda:
conda install requests

requests.get() returns a Response object; its basic attributes are listed here (a short demo follows this list):
  • status_code — the HTTP status code of the response
  • text — the response body as a string
  • content — the response body as bytes
  • encoding — the encoding used to decode the response body
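As a quick demonstration (a minimal sketch, not from the original text), the four attributes can be inspected on any response object:

import requests

# a minimal sketch showing the four Response attributes listed above
res = requests.get("https://www.baidu.com")
print(res.status_code)    # HTTP status code, e.g. 200
print(res.encoding)       # encoding guessed from the response headers
print(res.text[:60])      # body decoded to a str using that encoding
print(res.content[:60])   # raw body as bytes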

1.1 Requesting the Baidu homepage

Send a request to the Baidu homepage:

import requests

# send an HTTP request
re = requests.get("https://www.baidu.com")

# check the response status
print(re.status_code)
200

200 is the response status code, meaning the request succeeded.
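As a small optional sketch (not in the original), requests can also raise an exception for failed requests instead of you comparing the number yourself:

import requests

re = requests.get("https://www.baidu.com")
# raise_for_status() raises requests.exceptions.HTTPError for 4xx/5xx responses,
# so the code after this line only runs if the request succeeded
re.raise_for_status()
print(re.status_code)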

1.2 Downloading a txt file

Use the crawler to download Lu Xun's story Kong Yiji from https://apiv3.shanbay.com/codetime/articles/mnvdu

Try saving the content of this article with the crawler:

# send an HTTP request
re = requests.get("https://apiv3.shanbay.com/codetime/articles/mnvdu")

# check the response status
print(re.status_code)
200
with open("鲁迅文章.txt", 'w', encoding='utf-8') as file:
    # write the string form of the response to the file
    print("正在爬取小说")
    file.write(re.text)
正在爬取小说
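If the saved text turns out garbled, the guessed encoding was probably wrong. A hedged variant (the same apparent_encoding trick is used later in the Ziroom project) sets the encoding explicitly before writing:

re = requests.get("https://apiv3.shanbay.com/codetime/articles/mnvdu")
# apparent_encoding is detected from the response body and is usually more reliable
re.encoding = re.apparent_encoding
with open("鲁迅文章.txt", 'w', encoding='utf-8') as file:
    file.write(re.text)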

1.3 Downloading images

  • re.text is used to fetch and save text content
  • re.content is used to fetch and save binary content such as images, video and audio
(1) PNG image
re = requests.get("https://img-blog.csdnimg.cn/20210424184053989.PNG")
print(re.status_code)

# open a file named datawhale.png in binary write mode
with open('datawhale.png','wb') as ff:
    # write the binary form of the response to the file
    ff.write(re.content)
200
(2) JPG image
re = requests.get("https://pic4.zhimg.com/4d132bd813f3506557ca768c181648fe_b.jpg")
print(re.status_code)

# open a file named zhihu.jpg in binary write mode
with open('zhihu.jpg','wb') as ff:
    # write the binary form of the response to the file
    ff.write(re.content)
200
(3) GIF animation
re = requests.get("https://bestanimations.com/Music/Instruments/Percussion/Drums/drummer-animated-gif-3.gif")
print(re.status_code)

# open a file named 火柴.gif in binary write mode
with open('火柴.gif','wb') as ff:
    # write the binary form of the response to the file
    ff.write(re.content)
200
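For large binary files, re.content holds the whole body in memory. A hedged alternative sketch (not used in the original code) streams the download in chunks:

re = requests.get("https://bestanimations.com/Music/Instruments/Percussion/Drums/drummer-animated-gif-3.gif", stream=True)
with open('火柴.gif', 'wb') as ff:
    # iter_content yields the body piece by piece instead of loading it all at once
    for chunk in re.iter_content(chunk_size=8192):
        ff.write(chunk)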

2. Parsing and extracting HTML

First, a quick look at how a browser works.

When you type a URL into the browser, the browser sends a request to the server and the server responds. What the server returns to the browser is HTML code, and the browser renders that HTML into the page we normally see.

For example, let's look at Baidu's HTML page:

res=requests.get('https://baidu.com')
print(res.text)
<!DOCTYPE html>
<!--STATUS OK--><html> <head><meta http-equiv=content-type content=text/html;charset=utf-8><meta http-equiv=X-UA-Compatible content=IE=Edge>.......

The response is full of tagged information; markup of the form <>...</> is HTML, a hypertext markup language.

A simple HTML example:

<html>
    <head>
        <title>我的网页</title>
    </head>
    <body>
        hello world!
    </body>
</html>

So how do we parse an HTML page? That is where the BeautifulSoup library comes in.

3. Introduction to BeautifulSoup

3.1 Installation

pip install beautifulsoup4

or with conda:

conda install beautifulsoup4

The examples below use the lxml parser, so also install lxml (pip install lxml).
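Before the larger example, here is a minimal sketch (not part of the original text) that parses the small HTML page from section 2 and pulls out its pieces:

from bs4 import BeautifulSoup

html = """
<html>
    <head>
        <title>我的网页</title>
    </head>
    <body>
        hello world!
    </body>
</html>
"""
soup = BeautifulSoup(html, 'lxml')
print(soup.title.text)          # 我的网页
print(soup.find('body').text)   # hello world! (with surrounding whitespace)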

3.2 Parsing Douban Books Top 250

URL: https://book.douban.com/top250

(1) Fetch the page and print the parsed result

import requests
from bs4 import BeautifulSoup

headers = {
  'user-agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_13_6) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/76.0.3809.132 Safari/537.36'
}
res = requests.get('https://book.douban.com/top250', headers=headers)
soup = BeautifulSoup(res.text, 'lxml')
print(soup)
<!DOCTYPE html>
<html class="ua-mac ua-webkit book-new-nav" lang="zh-cmn-Hans">
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type"/>
<title>豆瓣读书 Top 250</title>
<script>!function(e){var o=function(o,n,t){var c,i,r=new 
</li>
<li class="">
<a data-moreurl-dict='{"from":"top-nav-click-time","uid":"0"}' href="https://time.douban.com/?dt_time_source=douban-web_top_nav" target="_blank">时间</a>......
(2) Locating tags by name and attribute
# find the div tag whose id is 'doubanapp-tip'
soup.find('div', id='doubanapp-tip')
<div id="doubanapp-tip">
<a class="tip-link" href="https://www.douban.com/doubanapp/app?channel=qipao">豆瓣 <span class="version">6.0</span> 全新发布</a>
<a class="tip-close" href="javascript: void 0;">×</a>
</div>
# find all span tags whose class is rating_nums
soup.find_all('span', class_='rating_nums')
[<span class="rating_nums">9.6</span>,
 <span class="rating_nums">9.4</span>,
 <span class="rating_nums">9.3</span>,
 <span class="rating_nums">9.4</span>,
 <span class="rating_nums">9.3</span>,
 <span class="rating_nums">9.4</span>,
 <span class="rating_nums">9.3</span>,
 <span class="rating_nums">9.2</span>,
 <span class="rating_nums">9.1</span>,
 <span class="rating_nums">9.2</span>,
 <span class="rating_nums">9.3</span>,
 <span class="rating_nums">9.0</span>,
 <span class="rating_nums">9.1</span>,
 <span class="rating_nums">9.2</span>,
 <span class="rating_nums">9.2</span>,
 <span class="rating_nums">9.0</span>,
 <span class="rating_nums">9.0</span>,
 <span class="rating_nums">8.9</span>,
 <span class="rating_nums">9.1</span>,
 <span class="rating_nums">9.1</span>,
 <span class="rating_nums">9.0</span>,
 <span class="rating_nums">9.3</span>,
 <span class="rating_nums">9.7</span>,
 <span class="rating_nums">9.2</span>,
 <span class="rating_nums">9.1</span>]
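find_all returns a list of Tag objects; as a small follow-on sketch (not in the original), get_text() turns them into plain strings, which can then be converted to numbers:

# extract the rating text from each span and convert it to a float
ratings = [float(tag.get_text()) for tag in soup.find_all('span', class_='rating_nums')]
print(ratings[:5])   # e.g. [9.6, 9.4, 9.3, 9.4, 9.3]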

4. Project 1: Scraping Ziroom apartment data

Because Ziroom applies some anti-scraping measures to its prices, prices are not covered in this section.

Ziroom (Wuhan) site: https://wh.ziroom.com/z/z/


import requests
from bs4 import BeautifulSoup
import random
import time
import csv

If you use only a single User-Agent header, the crawler is easily identified and blocked, so we prepare many UA strings and pick one at random for each request.

# a pool of User-Agent strings;
# picking one at random per request gives the crawler some protection against blocking
user_agent = [
    "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0",
    "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; InfoPath.3; rv:11.0) like Gecko",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Avant Browser)"]

The information to scrape for each listing: name, area, orientation, layout, location, floor, whether there is an elevator, year built, door lock type and greening ratio (price is skipped, as explained above).

The scraped information is saved to a CSV file.

  • Step 1: collect each listing's detail-page link

Each listing link has this form: <a href="//wh.ziroom.com/x/741955798.html" target="_blank">房屋名称</a>
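The href in that tag is protocol-relative (it begins with //), which is why the code below prepends 'https:' before requesting the page; a tiny sketch of that step:

href = "//wh.ziroom.com/x/741955798.html"   # value taken from the <a> tag above
full_url = 'https:' + href                  # -> https://wh.ziroom.com/x/741955798.html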

def get_info():
    csvheader=['名称','面积','朝向','户型','位置','楼层','是否有电梯','建成时间','门锁','绿化']
    with open('wuhan_ziru.csv', 'a+', newline='') as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(csvheader)          # write the header row
        for i in range(1, 51):              # the site has 50 listing pages in total
            print('正在爬取自如第%s页'%i)
            timelist=[1,2,3]
            print('有点累了,需要休息一下啦(¬㉨¬)')
            time.sleep(random.choice(timelist))   # sleep 1-3 seconds so we don't overload the server
            url='https://wh.ziroom.com/z/p%s/'%i
            headers = {'User-Agent': random.choice(user_agent)}
            r = requests.get(url, headers=headers)
            r.encoding = r.apparent_encoding
            soup = BeautifulSoup(r.text, 'lxml')
            all_info = soup.find_all('div', class_='info-box')
            print('开始干活咯(๑>؂<๑)')
            for info in all_info:
                href = info.find('a')
                if href is not None:
                    href='https:'+href['href']
                    try:
                        print('正在爬取%s'%href)
                        house_info=get_house_info(href)
                        writer.writerow(house_info)
                    except Exception:
                        print('出错啦,%s进不去啦( •̥́ ˍ •̀ू )'%href)

Inspecting a detail page shows where the information lives; we locate it by tag name (h1, dl/dd, li, span) and by class value:

<h1 class="Z_name"><i class="status iconicon_sign"></i>自如友家·电建地产盛世江城·4居室-05</h1>
----
<div class="Z_home_info">
<div class="Z_home_b clearfix">
    <dl class="">
        <dd>8.4</dd>
        <dt>使用面积</dt>
    </dl>
    <dl class="">
        <dd>朝南</dd>
        <dt>朝向</dt>
    </dl>
    <dl class="">
        <dd>41</dd>
        <dt>户型</dt>
    </dl>
</div>
</div>
----
<ul class="Z_home_o">
    <li>
        <span class="la">位置</span><span class="va">
        <span class="ad">小区距2号线长港路站步行约231</span>        
    </li>        
    <li>
        <span class="la">楼层</span><span class="va">6/43</span>
    </li>  
    <li>
        <span class="la">电梯</span><span class="va"></span>
    </li>
    <li>
        <span class="la">年代</span><span class="va">2016年建成</span>
    </li>
    <li>
        <span class="la">门锁</span><span class="va">智能门锁</span>
    </li>
    <li>
        <span class="la">绿化</span><span class="va">35%</span>
    </li>          
</ul>
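To make the mapping between the HTML above and the parsing code concrete, here is a hedged sketch that pulls just the floor ("楼层") out of the <ul class="Z_home_o"> block; detail_html stands for the HTML shown above, and the full script below does the same for every field by list index:

from bs4 import BeautifulSoup

detail_soup = BeautifulSoup(detail_html, 'lxml')             # detail_html: the snippet above
items = detail_soup.find('ul', class_='Z_home_o').find_all('li')
floor = items[1].find('span', class_='va').text              # the second <li> holds the floor, e.g. 6/43
print(floor)

Putting it all together: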
import requests
from bs4 import BeautifulSoup
import random
import time
import csv

# a pool of User-Agent strings;
# picking one at random per request gives the crawler some protection against blocking
user_agent = [
    "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0",
    "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; InfoPath.3; rv:11.0) like Gecko",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Avant Browser)"]

def get_info():
    csvheader=['名称','面积','朝向','户型','位置','楼层','是否有电梯','建成时间','门锁','绿化']
    with open('wuhan_ziru.csv', 'a+', newline='') as csvfile:
        writer = csv.writer(csvfile)
        writer.writerow(csvheader)          # write the header row
        for i in range(1, 51):              # the site has 50 listing pages in total
            print('正在爬取自如第%s页'%i)
            timelist=[1,2,3]
            print('有点累了,需要休息一下啦(¬㉨¬)')
            time.sleep(random.choice(timelist))   # sleep 1-3 seconds so we don't overload the server
    .......

def get_house_info(href):
    # fetch and parse one listing's detail page
    time.sleep(1)
    headers = {'User-Agent': random.choice(user_agent)}
    response = requests.get(url=href, headers=headers)
    response=response.content.decode('utf-8', 'ignore')
    soup = BeautifulSoup(response, 'lxml')
    name = soup.find('h1', class_='Z_name').text
    sinfo=soup.find('div', class_='Z_home_b clearfix').find_all('dd')
    area=sinfo[0].text        # usable area
    orien=sinfo[1].text       # orientation
    area_type=sinfo[2].text   # layout
    dinfo=soup.find('ul',class_='Z_home_o').find_all('li')
    location=dinfo[0].find('span',class_='va').text   # location
    loucen=dinfo[1].find('span',class_='va').text     # floor
    dianti=dinfo[2].find('span',class_='va').text     # elevator
    niandai=dinfo[3].find('span',class_='va').text    # year built
    mensuo=dinfo[4].find('span',class_='va').text     # door lock
    lvhua=dinfo[5].find('span',class_='va').text      # greening ratio
    # column order: ['名称','面积','朝向','户型','位置','楼层','是否有电梯','建成时间','门锁','绿化']
    room_info=[name,area,orien,area_type,location,loucen,dianti,niandai,mensuo,lvhua]
    return room_info

if __name__ == '__main__':
    get_info()
正在爬取自如第1页
有点累了,需要休息一下啦(¬㉨¬)
开始干活咯(๑>؂<๑)
正在爬取https://wh.ziroom.com/x/807988393.html
正在爬取https://wh.ziroom.com/x/807004676.html
正在爬取https://wh.ziroom.com/x/807953610.html
正在爬取https://wh.ziroom.com/x/808037785.html
正在爬取https://wh.ziroom.com/x/807919198.html
正在爬取https://wh.ziroom.com/x/793749918.html
正在爬取https://wh.ziroom.com/x/767210815.html
正在爬取https://wh.ziroom.com/x/789408295.html
正在爬取https://wh.ziroom.com/x/808031730.html
正在爬取https://wh.ziroom.com/x/768041620.html
正在爬取https://wh.ziroom.com/x/795938432.html
正在爬取https://wh.ziroom.com/x/807215719.html
正在爬取https://wh.ziroom.com/x/807853636.html
正在爬取https://wh.ziroom.com/x/779302932.html
正在爬取https://wh.ziroom.com/x/808088052.html
正在爬取https://wh.ziroom.com/x/765812075.html
正在爬取https://wh.ziroom.com/x/807251426.html
正在爬取https://wh.ziroom.com/x/785748194.html
正在爬取https://wh.ziroom.com/x/745251276.html
正在爬取https://wh.ziroom.com/x/790382757.html
正在爬取https://wh.ziroom.com/x/792641596.html
正在爬取https://wh.ziroom.com/x/807917147.html
正在爬取https://wh.ziroom.com/x/793643315.html
正在爬取https://wh.ziroom.com/x/775808119.html
正在爬取https://wh.ziroom.com/x/790675697.html
正在爬取https://wh.ziroom.com/x/807995540.html
正在爬取https://wh.ziroom.com/x/807734055.html
正在爬取https://wh.ziroom.com/x/750362109.html
正在爬取https://wh.ziroom.com/x/786190029.html
正在爬取自如第2页
有点累了,需要休息一下啦(¬㉨¬)
开始干活咯(๑>؂<๑)

5. Project 2: Scraping 36Kr newsflashes and sending them by email

36Kr newsflash page: https://36kr.com/newsflashes

In this section we scrape 36Kr newsflashes and then send them by email.

The flow is:

Python crawler -> send from mailbox A -> mail server -> receive in mailbox B

By inspecting the page, we find that each newsflash item is an <a> tag with class item-title, for example:

<a class="item-title" href="...">中国平安:推动新方正集团聚集医疗健康等核心业务发展</a>

The scraping code is shown below.

Note that the email is sent in HTML format, and in HTML the line-break tag is <br>.

def main(): 
    print('正在爬取数据')
    url = 'https://36kr.com/newsflashes'
    headers = {'User-Agent': random.choice(user_agent)}
    response = requests.get(url, headers=headers)
    response=response.content.decode('utf-8', 'ignore')
    soup = BeautifulSoup(response, 'lxml')
    news = soup.find_all('a', class_='item-title')  
    news_list=[]
    for i in news:
        title=i.get_text()
        href='https://36kr.com'+i['href']
        news_list.append(title+'<br>'+href)
    info='<br></br>'.join(news_list)

Next, configure the email sending settings:

smtpserver = 'smtp.qq.com'

# username and password of the sending mailbox
# (for QQ Mail this is usually the SMTP authorization code rather than the login password)
user = ''
password = ''

# sender and recipient addresses
sender = ''
receive = ''

def send_email(content):
    # send via QQ Mail
    title='36kr快讯'
    subject = title
    msg = MIMEText(content, 'html', 'utf-8')
    msg['Subject'] = Header(subject, 'utf-8')
    msg['From'] = sender
    msg['To'] = receive
    # the SSL protocol uses port 465 (this is the server port!)
    smtp = smtplib.SMTP_SSL(smtpserver, 465)
    # HELO: identify the client to the server
    smtp.helo(smtpserver)
    # EHLO: the server acknowledges and returns its capabilities
    smtp.ehlo(smtpserver)
    # log in to the mail server with username and password
    smtp.login(user, password)
    smtp.sendmail(sender, receive, msg.as_string())
    smtp.quit()
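A quick hedged usage check, assuming the mailbox settings above have been filled in:

# send a short HTML test message to verify the SMTP configuration
send_email('36kr test<br>https://36kr.com/newsflashes')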

Finally, the complete script:

import requests
import random
from bs4 import BeautifulSoup
import smtplib  # module for sending email
from email.mime.text import MIMEText  # email body
from email.header import Header  # email subject header

smtpserver = 'smtp.qq.com'

# username and password of the sending mailbox
# (for QQ Mail this is usually the SMTP authorization code rather than the login password)
user = ''
password = ''

# sender and recipient addresses
sender = ''
receive = ''

user_agent = [
    "Mozilla/5.0 (Macintosh; U; Intel Mac OS X 10_6_8; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Windows; U; Windows NT 6.1; en-us) AppleWebKit/534.50 (KHTML, like Gecko) Version/5.1 Safari/534.50",
    "Mozilla/5.0 (Windows NT 10.0; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0",
    "Mozilla/5.0 (Windows NT 10.0; WOW64; Trident/7.0; .NET4.0C; .NET4.0E; .NET CLR 2.0.50727; .NET CLR 3.0.30729; .NET CLR 3.5.30729; InfoPath.3; rv:11.0) like Gecko",
    "Mozilla/5.0 (compatible; MSIE 9.0; Windows NT 6.1; Trident/5.0)",
    "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.0; Trident/4.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)",
    "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10.6; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
    "Opera/9.80 (Macintosh; Intel Mac OS X 10.6.8; U; en) Presto/2.8.131 Version/11.11",
    "Opera/9.80 (Windows NT 6.1; U; en) Presto/2.8.131 Version/11.11",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_7_0) AppleWebKit/535.11 (KHTML, like Gecko) Chrome/17.0.963.56 Safari/535.11",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Maxthon 2.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; TencentTraveler 4.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; The World)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Avant Browser)"]

def main():
    print('正在爬取数据')
    url = 'https://36kr.com/newsflashes'
    headers = {'User-Agent': random.choice(user_agent)}
    response = requests.get(url, headers=headers)
    response=response.content.decode('utf-8', 'ignore')
    soup = BeautifulSoup(response, 'lxml')
    news = soup.find_all('a', class_='item-title')  
    news_list=[]
    for i in news:
        title=i.get_text()
        href='https://36kr.com'+i['href']
        news_list.append(title+'<br>'+href)
    info='<br></br>'.join(news_list)
    print('正在发送信息')
    send_email(info)

def send_email(content):
    # send via QQ Mail
    title='36kr快讯'
    subject = title
    msg = MIMEText(content, 'html', 'utf-8')
    msg['Subject'] = Header(subject, 'utf-8')
    msg['From'] = sender
    msg['To'] = receive
    # the SSL protocol uses port 465 (this is the server port!)
    smtp = smtplib.SMTP_SSL(smtpserver, 465)
    # HELO: identify the client to the server
    smtp.helo(smtpserver)
    # EHLO: the server acknowledges and returns its capabilities
    smtp.ehlo(smtpserver)
    # log in to the mail server with username and password
    smtp.login(user, password)
    smtp.sendmail(sender, receive, msg.as_string())
    smtp.quit()

if __name__ == '__main__':
    main()
