Cumulative crawling with Nutch

 

I recently searched the web for a lot of "incremental crawling" scripts for Nutch, but I think they differ from what the Nutch documentation defines as incremental crawling. They are better described as cumulative crawling.

First, some background: a while ago I finished setting up a Nutch environment, and the next task was to run cumulative crawls on a server, i.e. build a large index database locally (some articles mention distributed databases, which I don't fully understand yet). That clearly means using Nutch's low-level commands, such as generate, fetch, updatedb, and so on. Doing this by hand is slow and tedious, so the obvious answer is a script that builds, maintains, and updates the database.

Prerequisite: you need a large pool of URLs. You can get the DMOZ URL dump (it's a fairly big file, nearly 300 MB: content.rdf.u8.gz), then run

bin/nutch org.apache.nutch.tools.DmozParser content.example.txt > dmoz/urls

which uses the tool Nutch ships specifically for parsing the DMOZ dump.

Next, use the inject command to inject that batch of URLs into the crawldb database:

bin/nutch inject localweb/crawldb dmoz
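If you want to smoke-test the pipeline without the full DMOZ dump, note that inject just expects a directory of plain-text files with one URL per line. A small sketch (directory names follow this article; the bin/nutch call is commented out since it needs a live Nutch install):

```shell
# "bin/nutch inject <crawldb> <urldir>" reads every flat text file
# in <urldir>, one URL per line.
mkdir -p dmoz
printf 'http://example.com/\nhttp://example.org/\n' > dmoz/urls
# bin/nutch inject localweb/crawldb dmoz   # would add these URLs to the crawldb
wc -l < dmoz/urls
```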

The script, in outline:

1. generate
2. fetch
3. updatedb
4. loop over steps 1-3, controlled by depth
5. merge the segments and delete the old ones (mergesegs)
6. invertlinks: build the linkdb data
7. index: build the index (there is a caveat here, explained below)
8. dedup: deduplicate and optimize the index
9. merge: judging by the name it merges indexes, though I don't fully understand it yet
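Steps 1-4 above are just a loop. A minimal skeleton of that loop, assuming the crawl/ layout used further below (the actual bin/nutch calls are left as comments since they need a running Nutch install):

```shell
# Minimal skeleton of the generate/fetch/updatedb loop (hypothetical paths).
depth=3
for i in $(seq 1 $depth)
do
  echo "round $i of $depth"
  # bin/nutch generate crawl/crawldb crawl/segments -topN 15
  # segment=$(ls -d crawl/segments/* | tail -1)   # newest segment just generated
  # bin/nutch fetch "$segment" -threads 5
  # bin/nutch updatedb crawl/crawldb "$segment"
done
```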

That's the whole outline, but watch out for step 7:

bin/nutch index <index> <crawldb> <linkdb> <segment>

The index directory here must not exist before the command runs. So on the second run of the script the index directory will already be there; what then? My approach is mv --verbose localweb/index localweb/indexOLD, which not only gets the index directory out of the way but also keeps a backup. There is one more catch: if Tomcat is running at that moment the move fails, because a Tomcat thread holds the index files open, so we call Tomcat's shutdown command first and call startup again once the processing is done.
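Put together, the rotation around the index step looks roughly like this (a sketch following the article's localweb/ paths; the Tomcat and Nutch calls are commented out so the snippet runs anywhere):

```shell
# Rotate the old index out of the way before re-indexing.
mkdir -p localweb/index                          # simulate an existing index
# "$TOMCAT_HOME"/bin/shutdown.sh                 # stop Tomcat: it holds the index open
rm -rf localweb/indexOLD                         # drop the previous backup, if any
mv --verbose localweb/index localweb/indexOLD    # remove *and* back up in one step
# bin/nutch index localweb/index localweb/crawldb localweb/linkdb localweb/segments/*
# "$TOMCAT_HOME"/bin/startup.sh                  # bring Tomcat back up on the new index
```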

Variables needed, with explanations: crawldb_dir, segments_dir, linkdb_dir, index_dir, TOMCAT_HOME

depth: controls the number of loop iterations; threads; topN: crawl only the URLs ranked in the top N by score

adddays: my current understanding is that Nutch's default URL expiry (refetch) interval is 30 days, and this lets you adjust that cutoff yourself
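One way to collect those knobs at the top of a script (the values are illustrative, not canonical; the comments restate my understanding above):

```shell
# Crawl layout (hypothetical paths, matching the "crawl/" layout used below).
crawldb_dir=crawl/crawldb     # the URL/status database
segments_dir=crawl/segments   # one segment per generate/fetch round
linkdb_dir=crawl/linkdb       # inverted-link database
index_dir=crawl/index         # index served by the search webapp
TOMCAT_HOME=/opt/apache-tomcat-6.0.10

# Tuning knobs.
depth=2      # how many generate/fetch/updatedb rounds to run
threads=5    # concurrent fetcher threads
topN=15      # fetch only the N best-scoring URLs per round
adddays=5    # shifts the refetch-due check (default refetch interval: 30 days)

echo "crawling $crawldb_dir to depth $depth"
```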

 

Below is the script published by the Nutch project:

# runbot script to run the Nutch bot for crawling and re-crawling.
# Usage: bin/runbot [safe]
#        If executed in 'safe' mode, it doesn't delete the temporary
#        directories generated during crawl. This might be helpful for
#        analysis and recovery in case a crawl fails.
#
# Author: Susam Pal

depth=2
threads=5
adddays=5
topN=15 #Comment this statement if you don't want to set topN value

# Arguments for rm and mv
RMARGS="-rf"
MVARGS="--verbose"

# Parse arguments
if [ "$1" == "safe" ]
then
  safe=yes
fi

if [ -z "$NUTCH_HOME" ]
then
  NUTCH_HOME=.
  echo runbot: $0 could not find environment variable NUTCH_HOME
  echo runbot: NUTCH_HOME=$NUTCH_HOME has been set by the script 
else
  echo runbot: $0 found environment variable NUTCH_HOME=$NUTCH_HOME 
fi

if [ -z "$CATALINA_HOME" ]
then
  CATALINA_HOME=/opt/apache-tomcat-6.0.10
  echo runbot: $0 could not find environment variable CATALINA_HOME
  echo runbot: CATALINA_HOME=$CATALINA_HOME has been set by the script 
else
  echo runbot: $0 found environment variable CATALINA_HOME=$CATALINA_HOME 
fi

if [ -n "$topN" ]
then
  topN="-topN $topN"
else
  topN=""
fi

steps=8
echo "----- Inject (Step 1 of $steps) -----"
$NUTCH_HOME/bin/nutch inject crawl/crawldb urls

echo "----- Generate, Fetch, Parse, Update (Step 2 of $steps) -----"
for((i=0; i < $depth; i++))
do
  echo "--- Beginning crawl at depth `expr $i + 1` of $depth ---"
  $NUTCH_HOME/bin/nutch generate crawl/crawldb crawl/segments $topN \
      -adddays $adddays
  if [ $? -ne 0 ]
  then
    echo "runbot: Stopping at depth `expr $i + 1`. No more URLs to fetch."
    break
  fi
  segment=`ls -d crawl/segments/* | tail -1`

  $NUTCH_HOME/bin/nutch fetch $segment -threads $threads
  if [ $? -ne 0 ]
  then
    echo "runbot: fetch $segment at depth `expr $i + 1` failed."
    echo "runbot: Deleting segment $segment."
    rm $RMARGS $segment
    continue
  fi

  $NUTCH_HOME/bin/nutch updatedb crawl/crawldb $segment
done

echo "----- Merge Segments (Step 3 of $steps) -----"
$NUTCH_HOME/bin/nutch mergesegs crawl/MERGEDsegments crawl/segments/*
if [ "$safe" != "yes" ]
then
  rm $RMARGS crawl/segments
else
  rm $RMARGS crawl/BACKUPsegments
  mv $MVARGS crawl/segments crawl/BACKUPsegments
fi

mv $MVARGS crawl/MERGEDsegments crawl/segments

echo "----- Invert Links (Step 4 of $steps) -----"
$NUTCH_HOME/bin/nutch invertlinks crawl/linkdb crawl/segments/*

echo "----- Index (Step 5 of $steps) -----"
$NUTCH_HOME/bin/nutch index crawl/NEWindexes crawl/crawldb crawl/linkdb \
    crawl/segments/*

echo "----- Dedup (Step 6 of $steps) -----"
$NUTCH_HOME/bin/nutch dedup crawl/NEWindexes

echo "----- Merge Indexes (Step 7 of $steps) -----"
$NUTCH_HOME/bin/nutch merge crawl/NEWindex crawl/NEWindexes

echo "----- Loading New Index (Step 8 of $steps) -----"
${CATALINA_HOME}/bin/shutdown.sh

if [ "$safe" != "yes" ]
then
  rm $RMARGS crawl/NEWindexes
  rm $RMARGS crawl/index
else
  rm $RMARGS crawl/BACKUPindexes
  rm $RMARGS crawl/BACKUPindex
  mv $MVARGS crawl/NEWindexes crawl/BACKUPindexes
  mv $MVARGS crawl/index crawl/BACKUPindex
fi

mv $MVARGS crawl/NEWindex crawl/index

${CATALINA_HOME}/bin/startup.sh

echo "runbot: FINISHED: Crawl completed!"
echo ""
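To use it, save the script (for example as bin/runbot under NUTCH_HOME), make it executable, and run it with or without the safe argument. The sketch below substitutes a one-line stub for the real script so it can run anywhere:

```shell
# Hypothetical setup: the script saved as bin/runbot under NUTCH_HOME.
mkdir -p bin
printf '#!/bin/sh\necho "runbot: FINISHED"\n' > bin/runbot   # stub standing in for the real script
chmod +x bin/runbot
./bin/runbot            # normal run: temporary dirs are deleted
# ./bin/runbot safe     # safe mode: keeps BACKUP* copies for failure analysis
```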

 