Download all the links on a page at once with wget
wget is quite powerful: with just two commands you can download every link on a page, instead of saving each one by hand. It even picks up resources you might otherwise miss, such as .js and .css files.
The commands are as follows:
[wolf@dev wolfchina.bokee.com]$ wget http://wolfchina.bokee.com/
[wolf@dev wolfchina.bokee.com]$ wget -i index.html -F -B http://wolfchina.bokee.com/
The first command saves the page as index.html; the second reads that file as an HTML URL list (-F is --force-html, -B is --base). A detailed description of the -i option, from the wget manual:
-i file --input-file=file Read URLs from file, in which case no URLs need to be on the command line. If there are URLs both on the command line and in an input file, those on the command line will be the first ones to be retrieved. The file need not be an HTML document (but no harm if it is)---it is enough if the URLs are just listed sequentially. However, if you specify --force-html, the document will be regarded as html. In that case you may have problems with relative links, which you can solve either by adding "&lt;base href="url"&gt;" to the documents or by specifying --base=url on the command line.
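To make concrete what `-i` consumes: without `--force-html`, the input file is just a plain list of URLs, one per line. As a small sketch (the filenames and URLs below are hypothetical stand-ins), you can even build such a list yourself from a saved page, in which case neither `-F` nor `-B` is needed:

```shell
# Create a stand-in for a saved page (normally produced by the first wget).
cat > index.html <<'EOF'
<html><body>
<a href="http://wolfchina.bokee.com/a.css">css</a>
<a href="http://wolfchina.bokee.com/b.js">js</a>
</body></html>
EOF

# Extract the absolute URLs into a plain list, one per line --
# exactly the format `wget -i file` expects when -F is not given.
grep -o 'href="[^"]*"' index.html | sed 's/^href="//; s/"$//' > urls.txt
cat urls.txt

# Then fetch them all in one go:
#   wget -i urls.txt
```

Because urls.txt already contains absolute URLs, there is no relative-link problem and no need for `--base`; that option only matters when wget parses the HTML itself via `--force-html`.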