TSE - Defining the URL Class

  The first thing the TSE program must do is take a given URL, compose a request message, and send it to the server that the URL points to. The Url class is defined for this purpose.

Below is the definition of the URL class, from the file Url.h.
enum url_scheme {
    SCHEME_HTTP,
    SCHEME_FTP,
    SCHEME_INVALID
};

class CUrl
{
public:
    string m_sUrl;              // the URL string
    enum url_scheme m_eScheme;  // protocol (scheme) name
    string m_sHost;             // host name
    int m_nPort;                // port number
    string m_sPath;             // requested resource (path)
public:
    CUrl();
    ~CUrl();
    bool ParseUrl( string strUrl );
private:
    void ParseScheme ( const char *url );
};
A URL can be a string beginning with the HTTP, FTP, or another protocol. TSE mainly targets the HTTP protocol, but for generality url_scheme defines SCHEME_HTTP, SCHEME_FTP, and SCHEME_INVALID, corresponding to the HTTP protocol, the FTP protocol, and all other protocols, respectively. A URL consists of six parts:
<scheme>://<net_loc>/<path>;<params>?<query>#<fragment>
Apart from the scheme part, the other parts need not all appear in a URL at the same time.
Scheme is the protocol name, corresponding to m_eScheme in the URL class.
Net_loc is the network location, comprising the host name and port number, corresponding to m_sHost and m_nPort in the URL class.
The following four parts together correspond to m_sPath in the URL class:
Path is the URL path.
Params are the object parameters.
Query is the query information, often also written as the request.
Fragment is the fragment identifier.
To keep the program simple, the implementation of the URL class mainly parses out the net_loc part, which is used to compose the request message sent to the server.
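The parsing of scheme, host, port, and path described above can be sketched as follows. This is a minimal illustration written for this article, not the actual implementation in TSE's Url.cpp:

```cpp
#include <cassert>
#include <cstdlib>
#include <string>
using namespace std;

enum url_scheme { SCHEME_HTTP, SCHEME_FTP, SCHEME_INVALID };

// Minimal sketch of CUrl::ParseUrl (illustration only; TSE's real code
// may differ). Splits "<scheme>://<host>[:<port>][/<path>]".
class CUrl {
public:
    string m_sUrl;
    enum url_scheme m_eScheme;
    string m_sHost;
    int m_nPort;
    string m_sPath;

    CUrl() : m_eScheme(SCHEME_INVALID), m_nPort(80), m_sPath("/") {}

    bool ParseUrl(string strUrl) {
        m_sUrl = strUrl;
        size_t pos = strUrl.find("://");
        if (pos == string::npos) return false;

        // scheme part
        string scheme = strUrl.substr(0, pos);
        if (scheme == "http")     { m_eScheme = SCHEME_HTTP; m_nPort = 80; }
        else if (scheme == "ftp") { m_eScheme = SCHEME_FTP;  m_nPort = 21; }
        else                      { m_eScheme = SCHEME_INVALID; return false; }

        // net_loc part: host[:port], terminated by the first '/'
        string rest   = strUrl.substr(pos + 3);
        size_t slash  = rest.find('/');
        string netloc = (slash == string::npos) ? rest : rest.substr(0, slash);
        m_sPath = (slash == string::npos) ? "/" : rest.substr(slash);

        size_t colon = netloc.find(':');
        if (colon != string::npos) {
            m_sHost = netloc.substr(0, colon);
            m_nPort = atoi(netloc.substr(colon + 1).c_str());
        } else {
            m_sHost = netloc;
        }
        return !m_sHost.empty();
    }
};
```

Note that the sketch folds path, params, query, and fragment into m_sPath, matching the simplification described above.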
The function void CUrl::ParseUrlEx(const char *url, char *protocol, int lprotocol, char *host, int lhost, char *request, int lrequest, int *port) performs the actual string matching to extract the protocol name, host name, request information, and port number, and stores them in the Url class member variables. The URL class also contains some detail functions used during TSE's crawling, such as char *CUrl::GetIpByHost(const char *host), CUrl::IsValidHost(const char *host), and bool CUrl::IsVisitedUrl(const char *url); they are not described one by one here. Readers can find the concrete implementations in the TSE source code.
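Once the host, port, and path have been extracted, the request message mentioned above can be composed. Below is a hedged sketch; BuildRequest is a hypothetical helper invented for illustration, and TSE's actual request headers may differ:

```cpp
#include <cassert>
#include <string>
using namespace std;

// Hypothetical helper (not part of TSE): compose an HTTP/1.0 GET request
// from the parsed URL parts, ready to be written to the server socket.
string BuildRequest(const string &host, int port, const string &path) {
    string req = "GET " + path + " HTTP/1.0\r\n";
    req += "Host: " + host;
    if (port != 80)                       // the default port can be omitted
        req += ":" + to_string(port);
    req += "\r\n";
    req += "Accept: */*\r\n";
    req += "\r\n";                        // blank line terminates the headers
    return req;
}
```

The resulting string is exactly what a crawler would write to a TCP connection opened to m_sHost:m_nPort.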
TSE (Tiny Search Engine)
=======================
(Temporary) Web home: http://162.105.80.44/~yhf/Realcourse/

TSE is a free utility for non-interactive download of files from the Web.
It supports HTTP. According to a query word or URL, it retrieves results
from crawled pages. It can follow links in HTML pages and create output
files in Tianwang (http://e.pku.edu.cn/) format or ISAM format files.
Additionally, it provides link structures which can be used to rebuild
the web frame.
---------------------------
Main functions in the TSE:
1) normal crawling, named SE, e.g. crawling all pages in PKU scope, and
   retrieving results from crawled pages according to a query word or URL;
2) crawling images and corresponding pages, named ImgSE.
---------------------------
INSTALL:
1) execute "tar xvfz tse.XXX.gz"
---------------------------
Before running the program, note:
The program defaults to normal crawling (SE). For ImgSE, you should:
1. change the code with the following requirements:
   1) In the "Page.cpp" file, find the two identical functions
      "CPage::IsFilterLink(string plink)". One is for ImgSE, whose URLs
      must include "tupian", "photo", "ttjstk", etc.; the other is for
      normal crawling. For ImgSE, remember to comment out the paragraph
      and choose the right "CPage::IsFilterLink(string plink)". For SE,
      remember to open the paragraph and choose the right
      "CPage::IsFilterLink(string plink)".
   2) In the Http.cpp file,
      i.   find "if( iPage.m_sContentType.find("image") != string::npos )"
           and comment out the right paragraph.
   3) In the Crawl.cpp file,
      i.   find "if( iPage.m_sContentType != "text/html" " and comment out
           the right paragraph;
      ii.  find "if(file_length < 40)" and choose the right line;
      iii. find "iMD5.GenerateMD5( (unsigned char*)iPage.m_sContent.c_str(),
           iPage.m_sContent.length() )" and comment out the right paragraph;
      iv.  find "if (iUrl.IsImageUrl(strUrl))" and comment out the right
           paragraph.
2. run "sh Clean" (note: do not remove link4History.url; comment out the
   "rm -f link4History.url" line first), then use "link4History.url" as a
   seed file. "link4History" is produced during normal crawling (SE).
---------------------------
EXECUTION:
execute "make clean; sh Clean; make".
1) for normal crawling and retrieving
   ./Tse -c tse_seed.img
   According to a query word or URL, retrieve results from crawled pages:
   ./Tse -s
2) for ImgSE
   ./Tse -c tse_seed.img
   After moving the Tianwang.raw.* data to a secure place, execute
   ./Tse -c link4History.url
---------------------------
Detail functions:
1) supporting multithreaded crawling of pages
2) persistent HTTP connections
3) DNS cache
4) IP block
5) filtering unreachable hosts
6) parsing hyperlinks from crawled pages
7) recursively crawling pages
8) outputting Tianwang format or ISAM format files
---------------------------
Files in the package:
Tse                  --- TSE executable file
tse_unreachHost.list --- unreachable hosts according to PKU IP block
tse_seed.pku         --- PKU seeds
tse_ipblock          --- PKU IP block
...
Directories in the package:
hlink, include, lib, stack, uri directories --- parse links from a page
---------------------------
Please report bugs in TSE to MAINTAINERS: YAN Hongfei
* Created: YAN Hongfei, Network lab of Peking University.
* Created: July 15 2003. version 0.1.1
*   # Can crawl web pages with a process
* Updated: Aug 20 2003. version 1.0.0 !!!!
*   # Can crawl web pages with multithreads
* Updated: Nov 08 2003. version 1.0.1
*   # more classes in the codes
* Updated: Nov 16 2003. version 1.1.0
*   # integrate a new version linkparser provided by XIE Han
*   # according to all MD5 values of pages content,
*     for all the pages not seen before, store a new page
* Updated: Nov 21 2003. version 1.1.1
*   # record all duplicate urls in terms of content MD5