Java crawler: parsing a URL and fetching pages

This uses the httpclient and jsoup packages.

URL to process: https://news.ecnu.edu.cn/cf/4c/c1833a118604/page.psp

Pages to crawl: c1833a118604 through c1833a118704

First, parse the URL to extract the page number:

public class SubUrl {

    public static int subUrl() {
        String url = "https://news.ecnu.edu.cn/cf/4c/c1833a118604/page.psp";
        // Split on "/"; the fifth segment is "c1833a118604"
        String[] strs = url.split("/");
        String str = strs[5];
        // The page number is everything after the "a", e.g. "118604"
        String str2 = str.substring(str.indexOf("a") + 1);
        return Integer.parseInt(str2);
    }
}
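As an aside (not from the original post), the same page number could also be pulled out with a regular expression, which avoids hard-coding the segment index; the class and method names here are illustrative:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class PageIdDemo {
    // Matches the "a<digits>" tail of a segment like "c1833a118604"
    private static final Pattern PAGE = Pattern.compile("c\\d+a(\\d+)");

    public static int extractPage(String url) {
        Matcher m = PAGE.matcher(url);
        if (!m.find()) {
            throw new IllegalArgumentException("no page id in: " + url);
        }
        return Integer.parseInt(m.group(1));
    }

    public static void main(String[] args) {
        // prints 118604
        System.out.println(extractPage("https://news.ecnu.edu.cn/cf/4c/c1833a118604/page.psp"));
    }
}
```

This also keeps working if the site ever changes how many path segments precede the page id.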

Then fetch each page and save it locally:

import java.io.File;
import java.io.FileOutputStream;
import java.io.OutputStreamWriter;

import org.apache.http.HttpEntity;
import org.apache.http.client.methods.CloseableHttpResponse;
import org.apache.http.client.methods.HttpGet;
import org.apache.http.impl.client.CloseableHttpClient;
import org.apache.http.impl.client.HttpClients;
import org.apache.http.util.EntityUtils;
import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

public class HttpRequest {

    public static void main(String[] args) throws Exception {
        int page = SubUrl.subUrl(); // 118604
        // Create one client and reuse it for all requests
        CloseableHttpClient httpClient = HttpClients.createDefault();
        // Fetch pages c1833a118604 through c1833a118704 (101 pages)
        for (int i = 0; i <= 100; i++) {
            String url = "https://news.ecnu.edu.cn/cf/4c/c1833a" + page + "/page.psp";
            HttpGet httpGet = new HttpGet(url);
            CloseableHttpResponse response = httpClient.execute(httpGet);
            HttpEntity entity = response.getEntity();
            String content = EntityUtils.toString(entity, "utf-8");
            response.close();

            Document document = Jsoup.parse(content);
            Elements elements = document.getElementsByTag("html");
            String string = elements.html();

            // Save the page to a local file
            String fileName = "sunbeam/result" + page + ".html";
            File file = new File(fileName);
            File fileParent = file.getParentFile();
            if (!fileParent.exists()) {
                // Create the parent directory first
                fileParent.mkdirs();
            }
            file.createNewFile();

            // Write the content out as UTF-8
            OutputStreamWriter osw = new OutputStreamWriter(new FileOutputStream(file), "UTF-8");
            osw.write(string);
            osw.close();

            page++;
        }
        httpClient.close();
    }
}
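The save-to-file portion can also be written more compactly with java.nio.file, which creates the directory and writes the bytes in one call each. This is a hedged sketch, independent of the post's code; the directory and class names are illustrative:

```java
import java.io.IOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;

public class SavePageDemo {
    // Writes html to <dir>/result<page>.html, creating the directory if needed
    public static Path save(Path dir, int page, String html) throws IOException {
        Files.createDirectories(dir);
        Path target = dir.resolve("result" + page + ".html");
        Files.write(target, html.getBytes(StandardCharsets.UTF_8));
        return target;
    }

    public static void main(String[] args) throws IOException {
        Path saved = save(Path.of("sunbeam"), 118604, "<html></html>");
        // prints <html></html>
        System.out.println(Files.readString(saved, StandardCharsets.UTF_8));
    }
}
```

Unlike File.createNewFile() plus a writer, Files.write truncates and replaces any existing file by default, so re-running the crawler simply overwrites earlier results.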
