Using robots.txt To Control Search Engine Spiders

 

Reposted from http://www.activewebhosting.com/faq/web-robots.html

 

What are robots and spiders?

Search engines such as Google and Yahoo! use what are called 'robots' or 'spiders' to visit pages on the internet and automatically add them to their search databases. Many people even submit their sites manually rather than wait for a robot or spider to visit. When you put a web page on your web server, it can take some time for your site to show up in a search engine. Once the page is entered into the database, however, it can also take a long time for the page to be removed should the page move or be taken off the server. For more information on how the major search engine spiders work, see each search engine's own documentation.

However, there may be times when you have information that you do not want to share with everyone or have the search engines put in their databases. You may even have a whole directory you wish to keep private.

One way to keep a search engine from adding your pages to its database is to put a file called robots.txt at the root of your site that tells robots which files and directories to skip. While this is not a foolproof way to protect your pages, it can at least help keep them from showing up in most search engine databases.

How do I create a robots.txt file?

You can create a robots.txt file with any Linux text editor, or any text editor that saves in Unix format. This is important, as the file must have Unix-style line breaks. Please see Text Editors You Can Use To Create CGI Scripts for more information. Note that the robots.txt file must be in the root directory (not in a subdirectory) of your CGI or web server. You can have one on each server if you want: put a robots.txt file in the root directory of your CGI server to control spidering of files on that server only, and put one in the root directory of your web server to control spidering of files on your web server only. Your robots.txt file affects only your own server(s), not anyone else's.
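Robots look for the file only at the very top of the server, so it must be reachable at a URL like the first one below (the domain name here is just a placeholder):

http://yourdomain.com/robots.txt            (correct: at the root of the server)
http://yourdomain.com/private/robots.txt    (ignored by robots: not at the root)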

The robots.txt file usually needs only two fields: User-agent and Disallow. Here are a few examples you can put in your robots.txt file. You can add more than one User-agent or Disallow field to your robots.txt file.

Allow all robots:

User-agent: *
Disallow:

This allows all robots to visit all pages on the server. Note that nothing was entered for Disallow even though the field was included in the robots.txt file.

Specify rules for a certain search engine:

User-agent: googlebot
Disallow:

This specifies Disallow rules to be followed only by Google's robot (Googlebot) when it visits your site. Note that nothing was entered for Disallow even though the field was included. This means all files can be added to Google's search database.

Keep all robots out entirely:

User-agent: *
Disallow: /

This keeps all robots from adding any pages on the server where the robots.txt file is placed. Note that the slash / in the Disallow field matches everything, so all files are excluded.

Ban a certain search engine from all directories:

User-agent: googlebot
Disallow: /

This would keep Google from adding any pages on your server to its search engine database.

Protecting only certain files:

User-agent: *
Disallow: /images/
Disallow: /email.html

This keeps all robots from adding any files in the images directory, or the email.html file, to their search databases. Note that Disallow: /images/ covers the subdirectories as well, so there is no need to add a separate rule for each subdirectory inside images. Spiders will not go into the images directory at all, nor visit any of the directories or files inside it.
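These rules can also be combined in a single file with more than one User-agent group. As a sketch (the directory names here are hypothetical), the following keeps every robot out of /private/ while additionally keeping Googlebot out of /drafts/:

User-agent: *
Disallow: /private/

User-agent: googlebot
Disallow: /private/
Disallow: /drafts/

A robot obeys only the group that best matches its name and ignores the rest, which is why the /private/ rule is repeated under the googlebot group.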

We recommend you take a look at an example robots.txt file from PimpSoft. You may want to copy this file and adjust it to your site's needs. This file helps keep certain harmful robots (spiders) off your site and controls how other robots spider your site. In this way, your pages can be indexed more efficiently.

Once you have constructed and saved your robots.txt file, upload it to the root directory of your web server using your FTP program.
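If you prefer to script the upload instead of using a graphical FTP program, here is a minimal sketch using Python's standard ftplib (the host name and login details are placeholders for your own account):

from ftplib import FTP

# Connect and log in to your hosting account (placeholders -- use your own details).
ftp = FTP("yourdomain.com")
ftp.login("username", "password")

# Upload robots.txt into the root directory of the web server.
with open("robots.txt", "rb") as f:
    ftp.storbinary("STOR robots.txt", f)

ftp.quit()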

Checking robots.txt Validity

Once you've uploaded the robots.txt file, it's usually a good idea to check the validity of the file and be sure there are no problems. You can do this using one of the robots.txt validators available online. Please be sure your robots.txt file is uploaded to your web site and provide the proper URL to the file, such as http://yourdomain.com/robots.txt .
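You can also check the rules yourself; Python's standard library ships a robots.txt parser. A minimal sketch, assuming your file lives at http://yourdomain.com/robots.txt and disallows /email.html as in the example above (the URLs are placeholders):

from urllib.robotparser import RobotFileParser

# Point the parser at the live robots.txt file and download it.
rp = RobotFileParser()
rp.set_url("http://yourdomain.com/robots.txt")
rp.read()

# Ask whether a given robot may fetch a given URL.
print(rp.can_fetch("*", "http://yourdomain.com/email.html"))          # False: disallowed for everyone
print(rp.can_fetch("googlebot", "http://yourdomain.com/index.html"))  # True: not disallowed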

Specifying Robot Rules in HTML Meta Tags

Alternatively (or even additionally) you can specify the rules in your HTML file itself, within a meta tag. This tag goes inside the head tag. Here is an example:

<head>
<meta name="robots" content="noindex,nofollow">
<title>My Page</title>
</head>


In the content= area within the quotes you have a few choices. The first word, before the comma, is either index, meaning the robot will add the page to the search engine database, or noindex, meaning the robot will not add the page to the search engine database.

For the second word, after the comma, you also have two choices. You can use follow, meaning the robot will also visit and catalog all other links you have on that page (provided there is no robots meta tag on those pages preventing it, in which case it will skip them), or nofollow, meaning the robot will act on only that page and not follow the links on it.
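The values can be mixed. For example, to keep a page itself out of the index while still letting robots follow the links on it, the head of that page (the title here is just an illustration) would look like:

<head>
<meta name="robots" content="noindex,follow">
<title>My Links Page</title>
</head>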

Which do I use, robots.txt or the meta tag?

The robots.txt method is best if you want to keep robots from indexing a whole directory or protect certain files. It also lets you change things in one file rather than in each .html file you have. This makes it well suited to keeping groups of pages out of search engines.

The robots meta tag is best if you only need to control indexing page by page, since the rule travels with the page itself rather than living in a separate file.

Do remember, though, that spiders can only find pages that are linked to from content they already know about. If any of your pages aren't linked to, spiders may not find and index them.
