Developing a Search Engine Spider in C# with ASP.NET

C# is particularly well suited to building spider programs, because HTTP access and multithreading are built in, and both capabilities are essential to a spider. These are the key problems to solve when building one:
  ⑴ HTML parsing: some kind of HTML parser is needed to analyze every page the spider encounters.
  ⑵ Page processing: every downloaded page must be handled; its content may be saved to disk or analyzed further.
  ⑶ Multithreading: only with multithreading can a spider be truly efficient.
  ⑷ Determining completion: do not underestimate this problem. Deciding whether the job is finished is not simple, especially in a multithreaded environment.

  1. HTML Parsing
  C# itself cannot parse HTML, although it does support XML parsing. XML, however, has a strict syntax, and a parser designed for XML is useless for HTML, whose syntax is far looser. We therefore need to design our own HTML parser. The parser provided in this article is highly self-contained, and you can easily reuse it anywhere else you need to process HTML in C#.
  The HTML parser presented here is implemented by the ParseHTML class and is very easy to use: first create an instance of the class, then set its Source property to the HTML document to be parsed:
ParseHTML parse = new ParseHTML();
parse.Source = "<p>Hello World</p>";


  Next, a loop can examine all of the text and tags contained in the HTML document. The scan typically starts with a while loop that tests the Eof method:
while(!parse.Eof())
{
char ch = parse.Parse();


  The Parse method returns the characters of the HTML document; it returns only characters that are not part of an HTML tag. When a tag is encountered, Parse returns 0 to signal it. Once we hit a tag, we can process it with the GetTag() method:
if(ch==0)
{
HTMLTag tag = parse.GetTag();
}


  Generally, one of a spider's most important tasks is to find each HREF attribute, and C#'s indexer syntax makes this easy. For example, the following code extracts the value of the HREF attribute, if one is present:
Attribute href = tag["HREF"];
string link = href.Value;


  Once you have the Attribute object, Attribute.Value yields the attribute's value. Note that the indexer returns null when the attribute is absent, so real code should check for null first, as the full listings below do.
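  Putting these pieces together, here is a minimal sketch of a complete link-extraction pass over a page, using only the calls shown above (the sample HTML is made up; the null check mirrors the ProcessPage method in the full DocumentWorker.cs listing below):

ParseHTML parse = new ParseHTML();
parse.Source = "<p><a href=\"page2.html\">Next</a></p>";
while(!parse.Eof())
{
char ch = parse.Parse();
if(ch==0)   // 0 means a tag was encountered
{
Attribute href = parse.GetTag()["HREF"];
if(href!=null)   // the tag may not carry an HREF attribute
System.Console.WriteLine("Found link: " + href.Value);
}
}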

  2. Processing HTML Pages
  Now let's look at how to process an HTML page. The first thing to do, of course, is to download the page, which can be done with the HttpWebRequest class:
HttpWebRequest request = (HttpWebRequest)WebRequest.Create(m_uri);
response = request.GetResponse();
stream = response.GetResponseStream();


  Next we create a stream from the response. Before any further processing, we must determine whether the file is binary or text, since the two types are handled differently. The following code determines whether the file is binary:
if( !response.ContentType.ToLower().StartsWith("text/") )
{
SaveBinaryFile(response);
return null;
}
string buffer = "",line;


  If the file is not text, we read it in as a binary file. If it is text, we first create a StreamReader from the stream and then append the text file's contents to a buffer line by line:
reader = new StreamReader(stream);
while( (line = reader.ReadLine())!=null )
{
buffer+=line+"\r\n";
}


  Once the whole file is loaded, it is saved as a text file:
SaveTextFile(buffer);
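  One aside: appending to a string inside a loop re-copies the buffer on every iteration. If you adapt this code, a StringBuilder variant, sketched here as an alternative rather than as part of the original listing, does the same work in linear time:

System.Text.StringBuilder buffer = new System.Text.StringBuilder();
string line;
reader = new StreamReader(stream);
while( (line = reader.ReadLine())!=null )
{
buffer.Append(line).Append("\r\n");   // avoids quadratic string concatenation
}
SaveTextFile(buffer.ToString());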


  Now let's look at how these two kinds of files are stored.
  A binary file's content-type declaration does not begin with "text/". The spider writes binary files straight to disk without any further processing, because a binary file contains no HTML and therefore no links that the spider would need to follow. These are the steps for writing a binary file.
  First, prepare a buffer to temporarily hold the binary file's contents:
byte []buffer = new byte[1024];


  Next, determine the local path and file name under which to save the file. Suppose we are downloading the contents of the myhost.com site into the local folder c:\test; if a binary file's web path and name are http://myhost.com/images/logo.gif, then the local path and name should be c:\test\images\logo.gif. At the same time, we must make sure the images subdirectory has been created under c:\test. This part of the job is done by the convertFilename method:
string filename = convertFilename( response.ResponseUri );


  The convertFilename method takes the HTTP address apart and creates the corresponding directory structure. Once the output file's name and path have been determined, we can open the input stream that reads the Web page and the output stream that writes the local file:
Stream outStream = File.Create( filename );
Stream inStream = response.GetResponseStream();


  Next, the Web file's contents are read and written to the local file, which a simple loop handles:
int l;
do
{
l = inStream.Read(buffer,0,buffer.Length);
if(l>0)
outStream.Write(buffer,0,l);
} while(l>0);


  After the whole file has been written, both streams are closed:
outStream.Close();
inStream.Close();
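  As an aside: on .NET 4.0 and later (well after this article was written), the copy loop above collapses into a single Stream.CopyTo call:

inStream.CopyTo(outStream);   // equivalent to the do/while copy loop above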


  By comparison, downloading a text file is easier. A text file's content type always begins with "text/". Assume the file has already been downloaded into a string; that string can be used to scan the page for links, and of course it can also be saved as a file on disk. The following code saves the text file:
string filename = convertFilename( m_uri );
StreamWriter outStream = new StreamWriter( filename );
outStream.Write(buffer);
outStream.Close();


  Here we first open a file output stream, then write the buffer's contents to the stream, and finally close the file.


  3. Multithreading
  Multithreading makes a computer appear to perform more than one operation at once; unless the machine has multiple processors, though, this simultaneity is simulated: the computer switches rapidly between threads to create the effect of running several operations at the same time. Broadly speaking, multithreading genuinely speeds a program up in only two situations. The first is when the machine has multiple processors; the second is when the program spends much of its time waiting on external events.
  For a spider, the second situation is precisely its typical behavior: each time it issues a URL request, it must wait for the file to finish downloading before it can request the next URL. If the spider can request several URLs at once, total download time clearly drops.
  To that end, the DocumentWorker class encapsulates all the work of downloading a single URL. Whenever a DocumentWorker instance is created, it enters a loop, waiting for the next URL to process. Here is DocumentWorker's main loop:
while(!m_spider.Quit )
{
m_uri = m_spider.ObtainWork();
m_spider.SpiderDone.WorkerBegin();
string page = GetPage();
if(page!=null)
ProcessPage(page);
m_spider.SpiderDone.WorkerEnd();
}


  This loop runs until the Quit flag is set to true (which happens when the user clicks the "Cancel" button). Inside the loop, ObtainWork is called to fetch a URL; it blocks until one becomes available, which requires another thread to have parsed a document and found links. The Done class uses the WorkerBegin and WorkerEnd methods to determine when the entire download operation is complete.
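  ObtainWork's blocking behavior rests on a Monitor wait over the shared workload queue; this is the relevant fragment of Spider.ObtainWork (the full listing appears below):

Monitor.Enter(this);
while(m_workload.Count<1)
{
Monitor.Wait(this);   // sleeps until addURI pulses this monitor
}
Uri next = (Uri)m_workload.Dequeue();
Monitor.Exit(this);

  The matching Monitor.Pulse call in addURI (also listed below) wakes a waiting worker each time a new URL is queued.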
  As Figure 1 shows, the spider lets the user choose how many threads to use. In practice, the optimal thread count depends on many factors. If your machine is fast, or has two processors, a relatively high thread count makes sense; conversely, with limited bandwidth or a slow machine, a high thread count will not necessarily improve performance.
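  For reference, starting the workers is simple; this condensed sketch is what Spider.Start and DocumentWorker.start (both in the full listings below) boil down to:

for(int i=1;i<=threads;i++)
{
DocumentWorker worker = new DocumentWorker(this);
worker.Number = i;
worker.start();   // wraps new Thread(new ThreadStart(worker.Process)).Start()
}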
  4. Is the Job Done?
  Downloading files on multiple threads at once effectively improves performance, but it also creates thread-management problems, the most complicated of which is: when has the spider actually finished its work? Here we rely on a dedicated class, Done, to make that judgment.
  First, it is worth spelling out exactly what "finished" means. The spider's work is complete only when no URL in the system is waiting to be downloaded and every worker thread has finished its processing. Put differently, done means there are no URLs waiting to be downloaded and none currently being downloaded.
  The Done class provides a WaitDone method whose job is to wait until the Done object detects that the spider has finished its work. Here is the WaitDone method:
public void WaitDone()
{
Monitor.Enter(this);
while ( m_activeThreads>0 )
{
Monitor.Wait(this);
}
Monitor.Exit(this);
}


  The WaitDone method waits until no active threads remain. Note, however, that no threads are active in the earliest moments of a download either, which could easily make the spider appear finished the instant it starts. To solve this, we need another method, WaitBegin, which waits for the spider to enter its "real" working phase. The usual calling sequence is: call WaitBegin first, then WaitDone; WaitDone then waits for the spider to finish its work. Here is WaitBegin:
public void WaitBegin()
{
Monitor.Enter(this);
while ( !m_started )
{
Monitor.Wait(this);
}
Monitor.Exit(this);
}
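  The calling side of this protocol, as it appears in Spider.Start in the listing below, is simply:

m_done.WaitBegin();   // block until the first worker has started
m_done.WaitDone();    // then block until no workers remain active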


  The WaitBegin method waits until the m_started flag has been set. That flag is set by the WorkerBegin method: a worker thread calls WorkerBegin when it begins processing a URL, and WorkerEnd when it finishes. These two methods let the Done object track the current working state. Here is the WorkerBegin method:
public void WorkerBegin()
{
Monitor.Enter(this);
m_activeThreads++;
m_started = true;
Monitor.Pulse(this);
Monitor.Exit(this);
}


  WorkerBegin first increments the count of active threads, then sets the m_started flag, and finally calls Pulse to notify any thread that may be waiting for a worker to start; as noted above, the method that may be waiting on the Done object is WaitBegin. Each time a URL finishes processing, the WorkerEnd method is called:
public void WorkerEnd()
{
Monitor.Enter(this);
m_activeThreads--;
Monitor.Pulse(this);
Monitor.Exit(this);
}


  The WorkerEnd method decrements the m_activeThreads counter and calls Pulse to release any thread that may be waiting on the Done object; as noted above, that is the WaitDone method.

  Conclusion: this article has covered the basics of building an Internet spider. The source code below will help you explore the topic in more depth, and it is flexible enough to be dropped easily into your own programs.

DocumentWorker.cs


using System;
using System.Net;
using System.IO;
using System.Threading;

namespace Spider
{
 /// <summary>
 /// Perform all of the work of a single thread for the spider.
 /// This involves waiting for a URL to become available, downloading
 /// and then processing the page.
 ///
 /// </summary>
 public class DocumentWorker
 {
  /// <summary>
  /// The base URI that is to be spidered.
  /// </summary>
  private Uri m_uri;

  /// <summary>
  /// The spider that this thread "works for"
  /// </summary>
  private Spider m_spider;

  /// <summary>
  /// The thread that is being used.
  /// </summary>
  private Thread m_thread;

  /// <summary>
  /// The thread number, used to identify this worker.
  /// </summary>
  private int m_number;
  

  /// <summary>
  /// The name for default documents.
  /// </summary>
  public const string IndexFile = "index.html";

  /// <summary>
  /// Constructor.
  /// </summary>
  /// <param name="spider">The spider that owns this worker.</param>
  public DocumentWorker(Spider spider)
  {
   m_spider = spider;
  }

  /// <summary>
   /// This method will take a URI name, such as /images/blank.gif
  /// and convert it into the name of a file for local storage.
  /// If the directory structure to hold this file does not exist, it
  /// will be created by this method.
  /// </summary>
  /// <param name="uri">The URI of the file about to be stored</param>
  /// <returns></returns>
  private string convertFilename(Uri uri)
  {
   string result = m_spider.OutputPath;
   int index1;
   int index2;   

   // add ending slash if needed
   if( result[result.Length-1]!='\\' )
    result = result+"\\";

   // strip the query if needed

   String path = uri.PathAndQuery;
   int queryIndex = path.IndexOf("?");
   if( queryIndex!=-1 )
    path = path.Substring(0,queryIndex);

   // see if an ending / is missing from a directory only
   
   int lastSlash = path.LastIndexOf('/');
   int lastDot = path.LastIndexOf('.');

   if( path[path.Length-1]!='/' )
   {
    if(lastSlash>lastDot)
     path+="/"+IndexFile;
   }

   // determine actual filename  
   lastSlash = path.LastIndexOf('/');

   string filename = "";
   if(lastSlash!=-1)
   {
    filename=path.Substring(1+lastSlash);
    path = path.Substring(0,1+lastSlash);
    if(filename.Equals("") )
     filename=IndexFile;
   }

    // create the directory structure, if needed
   index1 = 1;
   do
   {
    index2 = path.IndexOf('/',index1);
    if(index2!=-1)
    {
     String dirpart = path.Substring(index1,index2-index1);
     result+=dirpart;
     result+="//";
    
    
     Directory.CreateDirectory(result);

     index1 = index2+1;     
    }
   } while(index2!=-1);   

   // attach name
   result+=filename;

   return result;
  }

  /// <summary>
  /// Save a binary file to disk.
  /// </summary>
  /// <param name="response">The response used to save the file</param>
  private void SaveBinaryFile(WebResponse response)
  {
   byte []buffer = new byte[1024];

   if( m_spider.OutputPath==null )
    return;

   string filename = convertFilename( response.ResponseUri );
   Stream outStream = File.Create( filename );
   Stream inStream = response.GetResponseStream(); 
   
   int l;
   do
   {
    l = inStream.Read(buffer,0,buffer.Length);
    if(l>0)
     outStream.Write(buffer,0,l);
   }
   while(l>0);
   
   outStream.Close();
   inStream.Close();

  }

  /// <summary>
  /// Save a text file.
  /// </summary>
  /// <param name="buffer">The text to save</param>
  private void SaveTextFile(string buffer)
  {
   if( m_spider.OutputPath==null )
    return;

   string filename = convertFilename( m_uri );
   StreamWriter outStream = new StreamWriter( filename );
   outStream.Write(buffer);
   outStream.Close();
  }

  /// <summary>
  /// Download a page
  /// </summary>
  /// <returns>The data downloaded from the page</returns>
  private string GetPage()
  {
   WebResponse response = null;
   Stream stream = null;
   StreamReader reader = null;

   try
   {
    HttpWebRequest request = (HttpWebRequest)WebRequest.Create(m_uri);
       
    response = request.GetResponse();
    stream = response.GetResponseStream(); 

    if( !response.ContentType.ToLower().StartsWith("text/") )
    {
     SaveBinaryFile(response);
     return null;
    }

    string buffer = "",line;

    reader = new StreamReader(stream);
   
    while( (line = reader.ReadLine())!=null )
    {
     buffer+=line+"\r\n";
    }
   
    SaveTextFile(buffer);
    return buffer;
   }
   catch(WebException e)
   {
    System.Console.WriteLine("下载失败,错误:" + e);
    return null;
   }
   catch(IOException e)
   {
    System.Console.WriteLine("下载失败,错误:" + e);
    return null;
   }
   finally
   {
    if( reader!=null ) reader.Close();
    if( stream!=null ) stream.Close();
    if( response!=null ) response.Close();
   }
  }

  /// <summary>
  /// Process each link encountered. The link will be recorded
   /// for later spidering if it is an http or https document,
   /// has not been visited before (determined by the Spider class),
  /// and is in the same host as the original base URL.
  /// </summary>
  /// <param name="link">The URL to process</param>
  private void ProcessLink(string link)
  {
   Uri url;

   // fully expand this URL if it was a relative link
   try
   {
    url = new Uri(m_uri,link,false);
   }
   catch(UriFormatException e)
   {
    System.Console.WriteLine( "Invalid URI:" + link +" Error:" + e.Message);
    return;
   }

   if(!url.Scheme.ToLower().Equals("http") &&
    !url.Scheme.ToLower().Equals("https") )
    return;

   // comment out this line if you would like to spider
   // the whole Internet (yeah right, but it will try)
   if( !url.Host.ToLower().Equals( m_uri.Host.ToLower() ) )
    return;

   //System.Console.WriteLine( "Queue:"+url );
   m_spider.addURI( url );

 

  }

  /// <summary>
  /// Process a URL
  /// </summary>
  /// <param name="page">the URL to process</param>
  private void ProcessPage(string page)
  {
   ParseHTML parse = new ParseHTML();
   parse.Source = page;

   while(!parse.Eof())
   {
    char ch = parse.Parse();
    if(ch==0)
    {
     Attribute a = parse.GetTag()["HREF"];
     if( a!=null )
      ProcessLink(a.Value);
     
     a = parse.GetTag()["SRC"];
     if( a!=null )
      ProcessLink(a.Value);
    }
   }
  }


  /// <summary>
  /// This method is the main loop for the spider threads.
  /// This method will wait for URL's to become available,
  /// and then process them.
  /// </summary>
  public void Process()
  {
   while(!m_spider.Quit )
   {
    m_uri = m_spider.ObtainWork();
    
    m_spider.SpiderDone.WorkerBegin();
    System.Console.WriteLine("Download("+this.Number+"):"+m_uri);   
    string page = GetPage();
    if(page!=null)
     ProcessPage(page);
    m_spider.SpiderDone.WorkerEnd();
   }
  }

  /// <summary>
  /// Start the thread.
  /// </summary>
  public void start()
  {
   ThreadStart ts = new ThreadStart( this.Process );
   m_thread = new Thread(ts);
   m_thread.Start();
  }

  /// <summary>
  /// The thread number. Used only to identify this thread.
  /// </summary>
  public int Number
  {
   get
   {
    return m_number;
   }

   set
   {
    m_number = value;
   }
  
  }
 }
}

Done.cs


using System;
using System.Threading;

namespace Spider
{
 /// <summary>
 /// This is a very simple object that
 /// allows the spider to determine when
 /// it is done. This object implements
 /// a simple lock that the spider class
 /// can wait on to determine completion.
 /// Done is defined as the spider having
 /// no more work to complete.
 ///
 /// This spider is copyright 2003 by Jeff Heaton. However, it is
 /// released under a Limited GNU Public License (LGPL). You may
 /// use it freely in your own programs. For the latest version visit
 /// http://www.jeffheaton.com.
 ///
 /// </summary>
 public class Done
 {

  /// <summary>
  /// The number of SpiderWorker object
  /// threads that are currently working
  /// on something.
  /// </summary>
  private int m_activeThreads = 0;

  /// <summary>
  /// This boolean keeps track of if
  /// the very first thread has started
  /// or not. This prevents this object
  /// from falsely reporting that the spider
  /// is done, just because the first thread
  /// has not yet started.
  /// </summary>
  private bool m_started = false;


  
  /// <summary>
  /// This method can be called to block
  /// the current thread until the spider
  /// is done.
  /// </summary>
  public void WaitDone()
  {
   Monitor.Enter(this);
   while ( m_activeThreads>0 )
   {
    Monitor.Wait(this);
   }
   Monitor.Exit(this);
  }

  /// <summary>
  /// Called to wait for the first thread to
  /// start. Once this method returns the
  /// spidering process has begun.
  /// </summary>
  public void WaitBegin()
  {
   Monitor.Enter(this);
   while ( !m_started )
   {
    Monitor.Wait(this);
   }
   Monitor.Exit(this);
  }


  /// <summary>
  /// Called by a SpiderWorker object
  /// to indicate that it has begun
  /// working on a workload.
  /// </summary>
  public void WorkerBegin()
  {
   Monitor.Enter(this);
   m_activeThreads++;
   m_started = true;
   Monitor.Pulse(this);
   Monitor.Exit(this);
  }

  /// <summary>
  /// Called by a SpiderWorker object to
  /// indicate that it has completed a
  /// workload.
  /// </summary>
  public void WorkerEnd()
  {
   Monitor.Enter(this);
   m_activeThreads--;
   Monitor.Pulse(this);
   Monitor.Exit(this);
  }

  /// <summary>
  /// Called to reset this object to
  /// its initial state.
  /// </summary>
  public void Reset()
  {
   Monitor.Enter(this);
   m_activeThreads = 0;
   Monitor.Exit(this);
  }
 }
}

ParseHTML.cs


using System;

namespace Spider
{
 /// <summary>
 /// Summary description for ParseHTML.
 ///
 /// This spider is copyright 2003 by Jeff Heaton. However, it is
 /// released under a Limited GNU Public License (LGPL). You may
 /// use it freely in your own programs. For the latest version visit
 /// http://www.jeffheaton.com.
 ///
 /// </summary>

 public class ParseHTML:Parse
 {
  public AttributeList GetTag()
  {
   AttributeList tag = new AttributeList();
   tag.Name = m_tag;

   foreach(Attribute x in List)
   {
    tag.Add((Attribute)x.Clone());
   }

   return tag;
  }

  public String BuildTag()
  {
   String buffer="<";
   buffer+=m_tag;
   int i=0;
   while ( this[i]!=null )
   {// has attributes
    buffer+=" ";
    if ( this[i].Value == null )
    {
     if ( this[i].Delim!=0 )
      buffer+=this[i].Delim;
     buffer+=this[i].Name;
     if ( this[i].Delim!=0 )
      buffer+=this[i].Delim;
    }
    else
    {
     buffer+=this[i].Name;
     if ( this[i].Value!=null )
     {
      buffer+="=";
      if ( this[i].Delim!=0 )
       buffer+=this[i].Delim;
      buffer+=this[i].Value;
      if ( this[i].Delim!=0 )
       buffer+=this[i].Delim;
     }
    }
    i++;
   }
   buffer+=">";
   return buffer;
  }

  protected void ParseTag()
  {
   m_tag="";
   Clear();

   // Is it a comment?
   if ( (GetCurrentChar()=='!') &&
    (GetCurrentChar(1)=='-')&&
    (GetCurrentChar(2)=='-') )
   {
    while ( !Eof() )
    {
     if ( (GetCurrentChar()=='-') &&
      (GetCurrentChar(1)=='-')&&
      (GetCurrentChar(2)=='>') )
      break;
      if ( GetCurrentChar()!='\r' )
      m_tag+=GetCurrentChar();
     Advance();
    }
    m_tag+="--";
    Advance();
    Advance();
    Advance();
    ParseDelim = (char)0;
    return;
   }

   // Find the tag name
   while ( !Eof() )
   {
    if ( IsWhiteSpace(GetCurrentChar()) || (GetCurrentChar()=='>') )
     break;
    m_tag+=GetCurrentChar();
    Advance();
   }

   EatWhiteSpace();

   // Get the attributes
   while ( GetCurrentChar()!='>' )
   {
    ParseName = "";
    ParseValue = "";
    ParseDelim = (char)0;

    ParseAttributeName();

    if ( GetCurrentChar()=='>' )
    {
     AddAttribute();
     break;
    }

    // Get the value(if any)
    ParseAttributeValue();
    AddAttribute();
   }
   Advance();
  }


  public char Parse()
  {
   if( GetCurrentChar()=='<' )
   {
    Advance();

    char ch=char.ToUpper(GetCurrentChar());
    if ( (ch>='A') && (ch<='Z') || (ch=='!') || (ch=='/') )
    {
     ParseTag();
     return (char)0;
    }
    else return(AdvanceCurrentChar());
   }
   else return(AdvanceCurrentChar());
  }
 }
}

Spider.cs


using System;
using System.Collections;
using System.Net;
using System.IO;
using System.Threading;

namespace Spider
{
 /// <summary>
 /// The main class for the spider. This spider can be used with the
 /// SpiderForm form that has been provided. The spider is completely
 /// self-contained. If you would like to use the spider with your own
 /// application just remove the references to m_spiderForm from this file.
 ///
 /// The files needed for the spider are:
 ///
 /// Attribute.cs - Used by the HTML parser
 /// AttributeList.cs - Used by the HTML parser
 /// DocumentWorker - Used to "thread" the spider
 /// Done.cs - Allows the spider to know when it is done
 /// Parse.cs - Used by the HTML parser
 /// ParseHTML.cs - The HTML parser
 /// Spider.cs - This file
 /// SpiderForm.cs - Demo of how to use the spider
 ///
 /// This spider is copyright 2003 by Jeff Heaton. However, it is
 /// released under a Limited GNU Public License (LGPL). You may
 /// use it freely in your own programs. For the latest version visit
 /// http://www.jeffheaton.com.
 ///
 /// </summary>
 public class Spider
 {
  /// <summary>
  /// The URL's that have already been processed.
  /// </summary>
  private Hashtable m_already;

  /// <summary>
  /// URL's that are waiting to be processed.
  /// </summary>
  private Queue m_workload;

  /// <summary>
  /// The first URL to spider. All other URL's must have the
  /// same hostname as this URL.
  /// </summary>
  private Uri m_base;

  /// <summary>
  /// The directory to save the spider output to.
  /// </summary>
  private string m_outputPath;

  /// <summary>
  /// The form that the spider will report its
  /// progress to.
  /// </summary>
  private SpiderForm m_spiderForm;

  /// <summary>
  /// How many URL's has the spider processed.
  /// </summary>
  private int m_urlCount = 0;

  /// <summary>
  /// When did the spider start working
  /// </summary>
  private long m_startTime = 0;

  /// <summary>
  /// Used to keep track of when the spider might be done.
  /// </summary>
  private Done m_done = new Done();  

  /// <summary>
  /// Used to tell the spider to quit.
  /// </summary>
  private bool m_quit;

  /// <summary>
  /// The status for each URL that was processed.
  /// </summary>
  enum Status { STATUS_FAILED, STATUS_SUCCESS, STATUS_QUEUED };


  /// <summary>
  /// The constructor
  /// </summary>
  public Spider()
  {
   reset();
  }

  /// <summary>
  /// Call to reset from a previous run of the spider
  /// </summary>
  public void reset()
  {
   m_already = new Hashtable();
   m_workload = new Queue();
   m_quit = false;
  }

  /// <summary>
  /// Add the specified URL to the list of URI's to spider.
  /// This is usually only used by the spider, itself, as
  /// new URL's are found.
  /// </summary>
  /// <param name="uri">The URI to add</param>
  public void addURI(Uri uri)
  {
   Monitor.Enter(this);
   if( !m_already.Contains(uri) )
   {
    m_already.Add(uri,Status.STATUS_QUEUED);
    m_workload.Enqueue(uri);
   }
   Monitor.Pulse(this);
   Monitor.Exit(this);
  }

  /// <summary>
  /// The URI that is to be spidered
  /// </summary>
  public Uri BaseURI
  {
   get
   {
    return m_base;
   }

   set
   {
    m_base = value;
   }
  }

  /// <summary>
  /// The local directory to save the spidered files to
  /// </summary>
  public string OutputPath
  {
   get
   {
    return m_outputPath;
   }

   set
   {
    m_outputPath = value;
   }
  }

  /// <summary>
  /// The object that the spider reports its
  /// results to.
  /// </summary>
  public SpiderForm ReportTo
  {
   get
   {
    return m_spiderForm;
   }

   set
   {
    m_spiderForm = value;
   }
  }

  /// <summary>
  /// Set to true to request the spider to quit.
  /// </summary>
  public bool Quit
  {
   get
   {
    return m_quit;
   }

   set
   {
    m_quit = value;
   }
  }

  /// <summary>
  /// Used to determine if the spider is done,
  /// this object is usually only used internally
  /// by the spider.
  /// </summary>
  public Done SpiderDone
  {
   get
   {
    return m_done;
   }

  }

  /// <summary>
   /// Called by the worker threads to obtain a URL
   /// to process.
  /// </summary>
  /// <returns>The next URL to process.</returns>
  public Uri ObtainWork()
  {
   Monitor.Enter(this);
   while(m_workload.Count<1)
   {
    Monitor.Wait(this);
   }


   Uri next = (Uri)m_workload.Dequeue();
   if(m_spiderForm!=null)
   {
    m_spiderForm.SetLastURL(next.ToString());
    m_spiderForm.SetProcessedCount(""+(m_urlCount++));
    long etime = (System.DateTime.Now.Ticks-m_startTime)/10000000L;
    long urls = (etime==0)?0:m_urlCount/etime;
    m_spiderForm.SetElapsedTime( etime/60 + " minutes (" + urls +" urls/sec)" );
   }

   Monitor.Exit(this);
   return next;
  }

  /// <summary>
  /// Start the spider.
  /// </summary>
  /// <param name="baseURI">The base URI to spider</param>
  /// <param name="threads">The number of threads to use</param>
  public void Start(Uri baseURI,int threads)
  {
   // init the spider
   m_quit = false;

   m_base = baseURI;
   addURI(m_base);
    m_startTime = System.DateTime.Now.Ticks;
   m_done.Reset();
  
   // startup the threads

    for(int i=1;i<=threads;i++)   // start one worker per requested thread
   {    
    DocumentWorker worker = new DocumentWorker(this);
    worker.Number = i;
    worker.start();
   }

   // now wait to be done

   m_done.WaitBegin();
   m_done.WaitDone();   
  }
 }
}