Web Crawling Tools

http://hi.baidu.com/whhzthfnayhntwe/item/a4f9ae056f08b012cc34eadc

Ruby Web Spidering and Data Extraction

Anemone:

http://anemone.rubyforge.org

Example:

Anemone.crawl("http://www.example.com/") do |anemone|
  anemone.on_every_page do |page|
    puts page.url
  end
end

Anemone provides five verbs (a combined sketch follows this list):

after_crawl - run a block over all of the crawled pages once the crawl has finished

focus_crawl - use a block to choose which links to follow on each page

on_every_page - run a block on every page

on_pages_like - run a block only on pages whose URL matches a given pattern

skip_links_like - skip pages whose URL matches a given pattern
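
A combined sketch of these verbs; the example.com URL and the /login and /archive/ patterns are only illustrative assumptions, not anything required by Anemone:

require 'anemone'

Anemone.crawl("http://www.example.com/") do |anemone|
  anemone.skip_links_like(/\/login/)                   # never follow URLs matching /login
  anemone.focus_crawl { |page| page.links.first(10) }  # follow at most 10 links per page
  anemone.on_pages_like(/\/archive\//) do |page|       # only pages whose URL matches /archive/
    puts "archive page: #{page.url}"
  end
  anemone.after_crawl do |pages|                       # runs once, over everything crawled
    puts "finished crawling #{pages.size} pages"
  end
end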

Each page object has the following attributes (see the example after this list):

url - the page's URL

aliases - URIs that redirected to this page, or that this page redirects to

headers - the HTTP response headers

code - the HTTP response code

doc - the page's Nokogiri::HTML::Document

links - an array of all URLs on the page that point to the same domain
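
Inside on_every_page these attributes can be read directly; a minimal sketch (the title lookup assumes an HTML response, since doc can be nil for non-HTML content):

require 'anemone'

Anemone.crawl("http://www.example.com/") do |anemone|
  anemone.on_every_page do |page|
    puts page.url                        # the page's URL
    puts page.code                       # HTTP response code, e.g. 200
    puts page.headers['content-type']    # response headers, keyed by lower-case name
    title = page.doc ? page.doc.at('title') : nil
    puts title.text if title             # lookup via the Nokogiri::HTML::Document
    puts page.links.size                 # number of same-domain links on the page
  end
end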



---------------------------------------------------------------------------

Mechanize:

http://mechanize.rubyforge.org



Examples:

require 'rubygems'
require 'mechanize'

# create an agent instance
agent = Mechanize.new

# load a web page
page = agent.get("http://www.inruby.com")

# use Mechanize::Page methods
page.title
page.content_type
page.encoding
page.images
page.links
page.forms
page.frames
page.iframes
page.labels

signup_page = page.link_with(:href => /signup/).click

# use Mechanize::Form
u_form = signup_page.form_with(:action => /users/)
u_form['user[login]'] = 'maiaimi'
u_form['user[password]'] = 'maiami'
u_form['user[password_confirmation]'] = 'maiami'
u_form.submit
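
The submit call returns the resulting Mechanize::Page, so its return value can be captured and inspected; a minimal sketch (the ".flash" selector is an assumption for illustration, not something inruby.com is known to use):

result = u_form.submit                 # Mechanize::Page for the server's response
puts result.code                       # HTTP status code, e.g. "200"
puts result.title                      # <title> of the page returned after signup
puts result.search(".flash").text      # assumed CSS selector for a confirmation message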

---------------------------------------------------------------------------------------------

Example 2:

This is an example of how to access a login-protected site with WWW::Mechanize. In this example, the login form has two fields named user and password. In other words, the HTML contains the following code:



1 <input name="user" .../>

2 <input name="password" .../>



Note that this example also shows how to enable WWW::Mechanize logging and how to capture the HTML response:



require 'rubygems'
require 'logger'
require 'mechanize'

# older releases used the WWW::Mechanize namespace; current ones use plain Mechanize
agent = Mechanize.new { |a| a.log = Logger.new(STDERR) }
#agent.set_proxy('a-proxy', '8080')
page = agent.get 'http://bobthebuilder.com'

form = page.forms.first
form.user = 'bob'
form.password = 'password'

page = agent.submit form

output = File.open("output.html", "w") { |file| file << page.body }



Use the search method to scrape the page content. In this example I extract all text contained by span elements, which in turn are contained by a table element having a class attribute equal to ‘list-of-links’:



puts page.search("//table[@class='list-of-links']//span/text()")
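
search returns a Nokogiri node set, so the same query can also be walked node by node with a block; a short sketch against the same (hypothetical) list-of-links table:

page.search("//table[@class='list-of-links']//span").each do |span|
  puts span.text.strip   # one line of text per span element
end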



Mechanize Tips

1. Agent alias

irb(main):071:0> Mechanize::AGENT_ALIASES.keys

=> ["Mechanize", "Linux Firefox", "Mac Mozilla", "Linux Mozilla", "Windows IE 6", "iPhone", "Linux Konqueror", "Windows IE 7", "Mac FireFox", "Mac Safari", "Windows Mozilla"]

2. Reassign Mechanize's HTML parser

Mechanize.html_parser = Hpricot

agent = Mechanize.new

agent.user_agent_alias = 'Windows IE 7'
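
Putting both tips together; Hpricot is an older parser gem and may not be installed, so this sketch falls back to Mechanize's default Nokogiri parser when the gem is missing:

require 'rubygems'
require 'mechanize'

begin
  require 'hpricot'
  Mechanize.html_parser = Hpricot         # tip 2: swap in the Hpricot parser (older setups)
rescue LoadError
  # hpricot not installed; Mechanize keeps its default Nokogiri parser
end

agent = Mechanize.new
agent.user_agent_alias = 'Windows IE 7'   # tip 1: any key from Mechanize::AGENT_ALIASES
puts agent.user_agent                     # the User-Agent header that will be sent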