[Python Learning] A Simple Web Crawler for Scraping Blog Articles, and the Ideas Behind It

Original link: http://www.2cto.com/kf/201410/340479.html


I have stressed before that Python is very effective for web crawling. This article combines what I learned from a Python video course with my graduate study in data mining to give a simple introduction to how Python crawls data from the web. The material is very basic, but I am sharing it anyway as a gentle introduction. I am only sharing knowledge here; please do not use it to harm websites or to infringe on other people's original articles. It covers:
1. The basic idea and process of crawling my own CSDN blog articles
2. Python source code that crawls all 316 articles of Han Han's Sina blog

I. The Basic Idea of the Crawler

From reading Bing Liu's (刘兵) book Web Data Mining recently, I learned that three main approaches are used for information extraction:
1. Manual method: inspect the page and its source to find a pattern, then write a program to extract the target data. This cannot handle a very large number of sites.
2. Wrapper induction: a supervised, semi-automatic method. It learns a set of extraction rules from manually labeled pages or data records and uses them to extract data from pages with a similar format.
3. Automatic extraction: an unsupervised method. Given one or several pages, it finds patterns or grammars in them automatically and uses those for extraction. Because no manual labeling is needed, it can handle extraction from a very large number of sites and pages.
The Python crawler used here is just a simple data-extraction program; later I will keep studying Python together with data mining and write more articles of this kind. What I want to fetch first are all of my own CSDN blog posts (as static .html files). The idea and the implementation are as follows:
Step 1: Analyze the source of the CSDN blog pages
The first thing to do is fetch one CSDN article by analyzing the blog's source. Pressing F12 in IE, or right-clicking and choosing "Inspect Element" in Google Chrome, lets you examine the blog's basic structure. The page http://blog.csdn.net/eastmount links to all of the author's posts.
The page source has the following structure (screenshot omitted): each blog post in the list sits inside its own block, and the original screenshots (also omitted here) show the first post's list entry and its HTML source. So we only need to take the link inside each post's block on every list page and prepend http://blog.csdn.net to it. Then, with code such as:

import urllib
# the line that actually fetched the page was lost in this copy of the article;
# presumably something like the author's blog home page:
content = urllib.urlopen('http://blog.csdn.net/eastmount').read()
open('test.html', 'w+').write(content)
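For illustration, the "prepend http://blog.csdn.net" step is nothing more than string concatenation; this is my own tiny example, not the article's code, and the relative path below is borrowed from an article link cited later in this post rather than read from the page:

# a tiny illustration (mine) of prepending the site root to a relative link
relative_link = '/eastmount/article/details/12235521'   # example href from a list entry
full_link = 'http://blog.csdn.net' + relative_link
print full_link    # http://blog.csdn.net/eastmount/article/details/12235521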

However, CSDN blocks this kind of request: the server refuses attempts to crawl site content and repost it elsewhere. (Our blog posts are often crawled by other sites without any statement of the original source, so please respect original work.) The request simply returns a "403 Forbidden" error.
PS: Reportedly, simulating normal browsing makes it possible to crawl CSDN content. Readers can look into that themselves; I will not cover it here. References (verified):
http://www.yihaomen.com/article/python/210.htm
http://www.2cto.com/kf/201405/304829.html
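A minimal sketch of what those two references describe (my own code, and the User-Agent value is only an example): send a browser-like User-Agent header so the server treats the request as ordinary browsing.

import urllib2

# pretend to be a normal browser; the header value here is just an example string
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36'}
req = urllib2.Request('http://blog.csdn.net/eastmount', headers=headers)
content = urllib2.urlopen(req).read()
open('test.html', 'w+').write(content)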
Step 2: Fetch all of my own articles
Here I only discuss the idea, and assume the first article has already been fetched successfully. Using Python's find() to keep searching from the position of the previous successful match, we can locate the link of the next article, and in this way collect every article on the first list page. Each page shows 20 articles, and the last page shows whatever is left.
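A rough sketch of that find() chain (my own code, not the article's; the href pattern is an assumption about how the CSDN list page marks its article links):

import urllib

# collect every article link on one list page by repeatedly searching
# from the end of the previous match
con = urllib.urlopen('http://blog.csdn.net/Eastmount/article/list/1').read()
links = []
pos = con.find('href="/eastmount/article/details/')      # assumed link pattern
while pos != -1:
    end = con.find('"', pos + 6)                          # closing quote of the href value
    links.append('http://blog.csdn.net' + con[pos + 6:end])
    pos = con.find('href="/eastmount/article/details/', end)
print links                                               # up to 20 links per page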
So how do we get the articles on the other pages?
Looking at the page navigation (screenshot omitted), we can see that jumping to a different page changes the URL to:

Page 1: http://blog.csdn.net/Eastmount/article/list/1
Page 2: http://blog.csdn.net/Eastmount/article/list/2
Page 3: http://blog.csdn.net/Eastmount/article/list/3
Page 4: http://blog.csdn.net/Eastmount/article/list/4

The idea is then very simple; the process looks roughly like this:
for(int i=0; i<4; i++)        // over all list pages
    for(int j=0; j<20; j++)   // over the articles on one page (mind the count on the last page)
        GetContent();         // fetch one article, mainly by following its hyperlink
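A Python rendering of the same two-level loop might look like this (my own sketch, not the article's code; the href pattern is again an assumption about the list page's HTML):

import urllib
import time

for i in range(1, 5):                                 # over all 4 list pages
    page = urllib.urlopen('http://blog.csdn.net/Eastmount/article/list/' + str(i)).read()
    pos = page.find('href="/eastmount/article/details/')
    while pos != -1:                                  # up to 20 articles per page, fewer on the last
        end = page.find('"', pos + 6)
        link = 'http://blog.csdn.net' + page[pos + 6:end]
        open(link.split('/')[-1] + '.html', 'w+').write(urllib.urlopen(link).read())
        time.sleep(1)                                 # pause between downloads
        pos = page.find('href="/eastmount/article/details/', end)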
Regular expressions, which I have also studied, are especially convenient when extracting things such as images from page content; see my earlier article on fetching images with C# and regular expressions: http://blog.csdn.net/eastmount/article/details/12235521
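The same idea in Python might look like this small sketch (mine, not from the article); the attribute layout matched by the expression is an assumption:

import re
import urllib

# pull every <img> address out of a downloaded page with a regular expression
content = urllib.urlopen('http://blog.sina.com.cn/s/blog_4701280b0102eo83.html').read()
for img_url in re.findall(r'<img[^>]+src="([^"]+)"', content):
    print img_url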

II. Crawling the Sina Blog

The basic idea of the crawler was described above, but some sites' servers refuse to hand their content to crawlers; some Sina blogs, however, can still be fetched. Here, following the "51CTO Academy, Zhipu Education Python video", we fetch all of Han Han's Sina blog posts.
The address is: http://blog.sina.com.cn/s/articlelist_1191258123_0_1.html
In the same way as above, we can see that each article entry block on the list page contains the hyperlink to one article, as shown in the screenshot (omitted).
The Python code that fetches one article is then:
import urllib
content = urllib.urlopen("http://blog.sina.com.cn/s/blog_4701280b0102eo83.html").read()
open('blog.html', 'w+').write(content)

The fetched article can then be displayed. Next we need to get an article's hyperlink, i.e. the link behind:
《论电影的七个元素》——关于我对电…
Since regular expressions have not been covered yet, we extract the hyperlink "by hand" with Python: starting from the beginning of the page, find the first "<a title", then find the "href=" and ".html" that follow it, which yields "http://blog.sina.com.cn/s/blog_4701280b0102eo83.html". The code is as follows:
#coding:utf-8
# only the first lines of this snippet survived in this copy of the article;
# the rest is reconstructed from the description above
import urllib

con = urllib.urlopen("http://blog.sina.com.cn/s/articlelist_1191258123_0_1.html").read()
title = con.find(r'<a title')       # first article entry
href = con.find(r'href=', title)    # its href attribute
html = con.find(r'.html', href)     # end of the article URL
print con[href+6:html+5]            # http://blog.sina.com.cn/s/blog_4701280b0102eo83.html
Following the idea described earlier, two nested loops are enough to fetch all of the articles. The full code is:
#coding:utf-8
# NOTE: this listing was badly garbled in this copy; the inner link-finding loop is
# reconstructed from the surviving fragments and the description above
import urllib
import time

page = 1
while page <= 7:                      # Han Han's blog has 7 list pages
    url = [''] * 50                   # each Sina list page shows up to 50 posts
    temp = 'http://blog.sina.com.cn/s/articlelist_1191258123_0_' + str(page) + '.html'
    con = urllib.urlopen(temp).read()
    # initialisation
    i = 0
    title = con.find(r'<a title')
    href = con.find(r'href=', title)
    html = con.find(r'.html', href)
    # collect this page's article links
    while title != -1 and href != -1 and html != -1 and i < 50:
        url[i] = con[href+6:html+5]
        print url[i]
        title = con.find(r'<a title', html)
        href = con.find(r'href=', title)
        html = con.find(r'.html', href)
        i = i + 1
    else:
        print 'find end!'
    # download every article found on this page
    j = 0
    while j < i:                      # the first 6 pages hold 50 posts, the last page holds i posts
        content = urllib.urlopen(url[j]).read()
        open(r'hanhan/' + url[j][-26:], 'w+').write(content)   # 'w+' creates the file if it does not exist
        j = j + 1
        time.sleep(1)
    else:
        print 'download'
    page = page + 1
else:
    print 'all find end'
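Two practical notes on this listing: the downloads are written into a hanhan/ folder which, assuming that reconstructed path is right, must exist before the script runs (the 'w+' mode creates files but not directories), and time.sleep(1) pauses between downloads so the crawler does not hit Sina's server too quickly.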
In this way we crawled all 316 of Han Han's Sina blog articles successfully, and every article can be displayed (screenshot of the result omitted).

This article has been a simple introduction to crawling web data with Python. Later I will study some smarter data-mining techniques and how to apply them with Python, so as to crawl more efficiently and to learn about users' intent and interests; I would like to build two tools that intelligently crawl images and novels.

The article only presents the idea. Please respect other people's original work: do not crawl articles at will and repost them without the original author's information! Finally, I hope it helps; I am new to Python, so please bear with any mistakes or shortcomings.

(By: Eastmount, 2014-9-28, around noon; original post on CSDN http://blog.csdn.net/eastmount/)

References:
1. 51CTO Academy, Zhipu Education Python video: http://edu.51cto.com/course/course_id-581.html
2. Web Data Mining, by Bing Liu (刘兵)