Winter break is about to start and I needed something to do, so I decided to download the Project Euler problems for offline reading. My first thought was wget's recursive download, but projecteuler.net uses robots.txt to block wget. Apparently Project Euler long ago anticipated scoundrels like me; but scoundrels are hard to guard against, and a scoundrel always has a scoundrel's methods.
My method is to reimplement wget's recursive download myself, which of course means I no longer have to care about robots.txt. The code is below; the core is the downloadPage method, which works as follows:
- Resolve the absolute URL. If the URL is outside the projecteuler.net domain, or has already been downloaded, return immediately; otherwise add it to the set of downloaded URLs.
- If the URL is not an HTML page (an image, CSS, JavaScript, etc.), download it directly into the corresponding local directory. The file name has to be transformed, because URLs often contain characters such as "?" and ":" that are generally not allowed in file names; here I simply replace each such special character with an underscore "_".
- If it is an HTML page, the process is more involved. First, find every link in the page (including those in <a>, <img>, and <link> tags). Links inside the projecteuler.net domain are rewritten to local paths and passed to downloadPage recursively; all other links are left unchanged. Finally, save the rewritten page locally, again with a transformed file name.
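Before the full listing, the link-matching regex from the last step can be exercised in isolation. This is a sketch for illustration only: the class name and the sample HTML fragment are made up, but the pattern is the same LINKS_PATTERN used in the listing below.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Standalone check of the link-matching regex: group 2 captures the
// href/src value of <a>, <link>, and <img> tags.
public class LinkPatternDemo {
    static final Pattern LINKS_PATTERN = Pattern.compile(
            "(<(?:a|link|img)\\s+(?:\\w+=\"[^\"]*\"\\s+)*(?:href|src)=\")([^\"]*)(\")");

    public static List<String> extractLinks(String html) {
        List<String> links = new ArrayList<String>();
        Matcher m = LINKS_PATTERN.matcher(html);
        while (m.find()) {
            links.add(m.group(2)); // the URL between the quotes
        }
        return links;
    }

    public static void main(String[] args) {
        String html = "<a class=\"nav\" href=\"index.php?section=problems\">Problems</a>"
                + "<img src=\"images/logo.png\">"
                + "<link rel=\"stylesheet\" href=\"style.css\">";
        // prints [index.php?section=problems, images/logo.png, style.css]
        System.out.println(extractLinks(html));
    }
}
```

Note that the pattern also matches tags with attributes before href/src (like the class attribute above), because of the `(?:\w+="[^"]*"\s+)*` group.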
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.Reader;
import java.io.Writer;
import java.net.HttpURLConnection;
import java.net.MalformedURLException;
import java.net.URL;
import java.util.HashSet;
import java.util.Set;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class ProjectEuler {

    private static final String ROOT_URL = "http://projecteuler.net/index.php?section=problems";
    private static final String PROJECT_EULER = "http://projecteuler.net";
    // Matches <a>, <link>, and <img> tags; group 2 is the href/src value.
    private static final Pattern LINKS_PATTERN = Pattern.compile(
            "(<(?:a|link|img)\\s+(?:\\w+=\"[^\"]*\"\\s+)*(?:href|src)=\")([^\"]*)(\")");

    private static File rootDir = new File("projecteuler");
    // URLs already downloaded (or in progress), to avoid revisiting pages.
    private static Set<String> downloadingPages = new HashSet<String>();

    public static void main(String[] args) throws MalformedURLException, IOException {
        downloadPage("", ROOT_URL);
    }

    private static void downloadPage(String baseUrl, String url)
            throws MalformedURLException, IOException {
        String absUrl = absoluteUrl(baseUrl, url);
        // Skip URLs outside projecteuler.net and URLs we have already seen.
        if (!absUrl.startsWith(PROJECT_EULER) || downloadingPages.contains(absUrl)) return;
        System.out.println("downloading page: " + absUrl);
        downloadingPages.add(absUrl);
        HttpURLConnection conn = (HttpURLConnection) new URL(absUrl).openConnection();
        InputStream in = conn.getInputStream();
        File localFile = new File(rootDir, getPathComp(absUrl));
        localFile.getParentFile().mkdirs();
        if (conn.getContentType().indexOf("html") != -1) { // an HTML page
            Reader reader = new InputStreamReader(in, "utf-8");
            StringBuilder sb = new StringBuilder();
            while (true) {
                int c = reader.read();
                if (c == -1) break;
                sb.append((char) c);
            }
            String content = sb.toString();
            Matcher matcher = LINKS_PATTERN.matcher(content);
            StringBuffer newContent = new StringBuffer();
            while (matcher.find()) {
                // Undo HTML entity escaping in the URL before resolving it.
                String link = matcher.group(2).replace("&amp;", "&").trim();
                if (link.length() > 0) {
                    downloadPage(absUrl, link);
                    String absLink = absoluteUrl(absUrl, link);
                    // Rewrite in-domain links to the local file name; leave the rest as-is.
                    // quoteReplacement protects any '$' or '\' in the URL.
                    matcher.appendReplacement(newContent, "$1"
                            + Matcher.quoteReplacement(absLink.startsWith(PROJECT_EULER)
                                    ? getPathComp(absLink) : absLink) + "$3");
                } else {
                    matcher.appendReplacement(newContent,
                            Matcher.quoteReplacement(matcher.group()));
                }
            }
            matcher.appendTail(newContent);
            Writer writer = new OutputStreamWriter(new FileOutputStream(localFile), "utf-8");
            writer.write(newContent.toString());
            writer.close();
        } else { // a non-HTML resource (image, CSS, JavaScript, ...): copy the raw bytes
            OutputStream os = new FileOutputStream(localFile);
            byte[] buf = new byte[1024];
            while (true) {
                int readSize = in.read(buf);
                if (readSize == -1) break;
                os.write(buf, 0, readSize);
            }
            os.close();
        }
        in.close();
        System.out.println("finished downloading page: " + absUrl);
    }

    // Replace characters that are not allowed in file names with underscores.
    private static String correctFileName(String urlName) {
        char[] specialChars = "?:&;=".toCharArray();
        for (int i = 0; i < specialChars.length; i++) {
            urlName = urlName.replace(specialChars[i], '_');
        }
        return urlName;
    }

    // Resolve url against baseUrl, handling already-absolute, host-relative
    // ("/...") and path-relative references.
    private static String absoluteUrl(String baseUrl, String url) {
        if (url.startsWith("http:") || url.startsWith("https:")) {
            return url;
        }
        baseUrl = baseUrl.substring(0, baseUrl.lastIndexOf('/') + 1);
        if (url.startsWith("/")) {
            // Host-relative: keep only the scheme and host of the base URL.
            int hostEnd = baseUrl.indexOf('/', baseUrl.indexOf("//") + 2);
            return baseUrl.substring(0, hostEnd) + url;
        }
        return baseUrl + url;
    }

    // Local file name for an in-domain URL: the path after the domain,
    // with special characters replaced.
    private static String getPathComp(String absoluteUrl) {
        if (absoluteUrl.startsWith(PROJECT_EULER)) {
            String path = absoluteUrl.substring(PROJECT_EULER.length());
            if (path.startsWith("/")) path = path.substring(1);
            if (path.length() == 0) path = "index.html"; // bare domain link
            return correctFileName(path);
        }
        return null;
    }
}
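The trickiest part of absoluteUrl is the host-relative ("/...") case. The standard library's java.net.URI.resolve implements full RFC 3986 reference resolution, so it makes a handy cross-check for the hand-rolled logic. A minimal sketch, where the class name and the sample URLs are mine:

```java
import java.net.URI;

// Cross-check relative-URL resolution against java.net.URI.resolve,
// which covers the same cases as the hand-written absoluteUrl.
public class UrlResolveDemo {
    public static String resolve(String baseUrl, String url) {
        return URI.create(baseUrl).resolve(url).toString();
    }

    public static void main(String[] args) {
        String base = "http://projecteuler.net/index.php?section=problems";
        // path-relative:  http://projecteuler.net/index.php?section=problems&id=1
        System.out.println(resolve(base, "index.php?section=problems&id=1"));
        // host-relative:  http://projecteuler.net/images/logo.png
        System.out.println(resolve(base, "/images/logo.png"));
        // already absolute, returned unchanged
        System.out.println(resolve(base, "http://example.com/"));
    }
}
```

A production crawler would probably just use URI.resolve directly; rolling it by hand, as above, is mainly instructive.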
The code is available in the attachment. I am not providing the downloaded problems themselves, mainly out of concern about copyright restrictions; if you want them and don't want to run the program, contact me.