In this little training challenge, you are going to learn about the Robots_exclusion_standard.
The robots.txt file is used by web crawlers to check if they are allowed to crawl and index your website or only parts of it.
Sometimes these files reveal the directory structure instead of protecting the content from being crawled. Enjoy!
Web crawlers use the robots.txt file to check whether they are allowed to crawl a site. Use dirsearch to scan for files under this IP.
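As a rough illustration of what dirsearch does under the hood (this is not the tool's actual code), here is a minimal Python sketch of the same directory-brute-force idea; the base URL is the challenge instance from this walkthrough, and the tiny wordlist is hypothetical:

```python
import requests

# Minimal stand-in for a dirsearch run; WORDLIST is a hypothetical sample
# (the real walkthrough used dirsearch with its bundled wordlists).
BASE = "http://111.200.241.244:51534"
WORDLIST = ["robots.txt", "index.php", "admin/", "flag.php"]

for path in WORDLIST:
    url = f"{BASE}/{path}"
    resp = requests.get(url, timeout=5)
    # A 200 response means the path exists and is readable.
    if resp.status_code == 200:
        print(f"[200] {url}")
```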
The scan turns up robots.txt, which returns a 200 status.
Visiting it discloses /fl0g.php.
Visit http://111.200.241.244:51534/fl0g.php to retrieve the flag.
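A short Python sketch of these last two steps, assuming the same challenge instance and that /fl0g.php appears in a Disallow line of robots.txt (the usual way such paths leak):

```python
import requests

BASE = "http://111.200.241.244:51534"  # challenge instance from the walkthrough

# Step 1: fetch robots.txt, which the scan showed returning 200.
robots = requests.get(f"{BASE}/robots.txt", timeout=5)
print(robots.status_code)
print(robots.text)

# Step 2: request every path listed in a Disallow rule; in this
# challenge that is expected to include /fl0g.php.
for line in robots.text.splitlines():
    if line.strip().lower().startswith("disallow:"):
        path = line.split(":", 1)[1].strip()
        page = requests.get(BASE + path, timeout=5)
        print(path, "->", page.status_code)
        print(page.text)
```

The irony the challenge description points at is visible here: a file meant to keep crawlers away is itself a plain-text index of the paths the site owner least wants found.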