1. Use requests.get(url) to fetch the HTML of a web page

>>> import requests
>>> url='https://www.baidu.com/'
>>> res=requests.get(url)
>>> res.encoding='utf-8'
>>> res.text
'<!DOCTYPE html>\r\n<!--STATUS OK--><html> <head><meta http-equiv=content-type content=text/html;charset=utf-8><meta http-equiv=X-UA-Compatible content=IE=Edge><meta content=always name=referrer><link rel=stylesheet type=text/css href=https://ss1.bdstatic.com/5eN1bjq8AAUYm2zgoY3K/r/www/cache/bdorz/baidu.min.css><title>百度一下,你就知道</title></head> <body link=#0000cc> <div id=wrapper> <div id=head> <div class=head_wrapper> <div class=s_form> <div class=s_form_wrapper> <div id=lg> <img hidefocus=true src=//www.baidu.com/img/bd_logo1.png width=270 height=129> </div> <form id=form name=f action=//www.baidu.com/s class=fm> <input type=hidden name=bdorz_come value=1> <input type=hidden name=ie value=utf-8> <input type=hidden name=f value=8> <input type=hidden name=rsv_bp value=1> <input type=hidden name=rsv_idx value=1> <input type=hidden name=tn value=baidu><span class="bg s_ipt_wr"><input id=kw name=wd class=s_ipt value maxlength=255 autocomplete=off autofocus=autofocus></span><span class="bg s_btn_wr"><input type=submit id=su value=百度一下 class="bg s_btn" autofocus></span> </form> </div> </div> <div id=u1> <a href=http://news.baidu.com name=tj_trnews class=mnav>新闻</a> <a href=https://www.hao123.com name=tj_trhao123 class=mnav>hao123</a> <a href=http://map.baidu.com name=tj_trmap class=mnav>地图</a> <a href=http://v.baidu.com name=tj_trvideo class=mnav>视频</a> <a href=http://tieba.baidu.com name=tj_trtieba class=mnav>贴吧</a> <noscript> <a href=http://www.baidu.com/bdorz/login.gif?login&tpl=mn&u=http%3A%2F%2Fwww.baidu.com%2f%3fbdorz_come%3d1 name=tj_login class=lb>登录</a> </noscript> <script>document.write(\'<a href="http://www.baidu.com/bdorz/login.gif?login&tpl=mn&u=\'+ encodeURIComponent(window.location.href+ (window.location.search === "" ? "?" : "&")+ "bdorz_come=1")+ \'" name="tj_login" class="lb">登录</a>\');\r\n </script> <a href=//www.baidu.com/more/ name=tj_briicon class=bri style="display: block;">更多产品</a> </div> </div> </div> <div id=ftCon> <div id=ftConw> <p id=lh> <a href=http://home.baidu.com>关于百度</a> <a href=http://ir.baidu.com>About Baidu</a> </p> <p id=cp>©2017 Baidu <a href=http://www.baidu.com/duty/>使用百度前必读</a> <a href=http://jianyi.baidu.com/ class=cp-feedback>意见反馈</a> 京ICP证030173号 <img src=//www.baidu.com/img/gs.gif> </p> </div> </div> </div> </body> </html>\r\n'

>>> type(res)
<class 'requests.models.Response'>
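The same fetch can also be written as a short stand-alone script. This is only a sketch: the timeout value and the raise_for_status() check are extra safeguards added here, not part of the original session.

import requests

url = 'https://www.baidu.com/'
res = requests.get(url, timeout=10)   # fail instead of hanging if the server does not answer
res.raise_for_status()                # raise an exception on 4xx/5xx responses
res.encoding = 'utf-8'                # the page declares utf-8, so decode .text with it
print(type(res))                      # <class 'requests.models.Response'>
print(res.text[:100])                 # first 100 characters of the HTML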
: "&")+ "bdorz_come=1")+ \'" name="tj_login" class="lb">登录</a>\');\r\n </script> <a href=//www.baidu.com/more/ name=tj_briicon class=bri style="display: block;">更多产品</a> </div> </div> </div> <div id=ftCon> <div id=ftConw> <p id=lh> <a href=http://home.baidu.com>关于百度</a> <a href=http://ir.baidu.com>About Baidu</a> </p> <p id=cp>©2017 Baidu <a href=http://www.baidu.com/duty/>使用百度前必读</a> <a href=http://jianyi.baidu.com/ class=cp-feedback>意见反馈</a> 京ICP证030173号 <img src=//www.baidu.com/img/gs.gif> </p> </div> </div> </div> </body> </html>\r\n' 8 9 >>> type(res) 10 <class 'requests.models.Response'> 11 12 2.利用BeautifulSoup的HTML解析器,生成结构树 13 14 >>> from bs4 import BeautifulSoup 15 >>> soup = BeautifulSoup(res.text,'html.parser') 16 17 3.找出特定标签的html元素 18 >>> soup.p 19 <p id="lh"> <a href="http://home.baidu.com">关于百度</a> <a href="http://ir.baidu.com">About Baidu</a> </p> 20 >>> soup.head 21 <head><meta content="text/html;charset=utf-8" http-equiv="content-type"/><meta content="IE=Edge" http-equiv="X-UA-Compatible"/><meta content="always" name="referrer"/><link href="https://ss1.bdstatic.com/5eN1bjq8AAUYm2zgoY3K/r/www/cache/bdorz/baidu.min.css" rel="stylesheet" type="text/css"/><title>百度一下,你就知道</title></head> 22 >>> soup.p.name 23 'p' 24 >>> soup.p.attrs 25 {'id': 'lh'} 26 >>> soup.p.contents 27 [' ', <a href="http://home.baidu.com">关于百度</a>, ' ', <a href="http://ir.baidu.com">About Baidu</a>, ' '] 28 >>> soup.p.text 29 ' 关于百度 About Baidu ' 30 >>> soup.p.string 31 >>> soup.select('li') 32 [] 33 soup.select('.news-list-title')[0].text 34 35 #取链接 36 soup.select('li')[1].a.attrs['href'] 37 38 4.取得含有特定CSS属性的元素 39 >>> soup.select('head') 40 41 [<head><meta content="text/html;charset=utf-8" http-equiv="content-type"/><meta content="IE=Edge" http-equiv="X-UA-Compatible"/><meta content="always" name="referrer"/><link href="https://ss1.bdstatic.com/5eN1bjq8AAUYm2zgoY3K/r/www/cache/bdorz/baidu.min.css" rel="stylesheet" type="text/css"/><title>百度一下,你就知道</title></head>] 42 >>> soup.select('li') 43 44 [] 45 46 47 5.练习: 48 取出h1标签的文本 49 取出a标签的链接 50 取出所有li标签的所有内容 51 取出一条新闻的标题、链接、发布时间、来源 52 53 取出h1标签的文本 54 soup.h1.text 55 56 取出a标签的链接 57 soup.a.attrs['href'] 58 59 取出所有li标签的所有内容 60 for i in soup.select('li'): 61 print(i.contents) 62 63 取出第2个li标签的a标签的第3个div标签的属性 64 soup.select('li')[1].a.select('div')[2] 65 66 取出一条新闻的标题、链接、发布时间、来源 67 68 #取新闻标题 69 soup.select('.news-list-title')[0].text 70 71 #取链接 72 soup.select('li')[1].a.attrs['href'] 73 74 #取发布时间 75 soup.select('.news-list-info')[0].contents[0].text 76 77 #取来源 78 soup.select('.news-list-info')[0].contents[1].text