Web crawler + simulating a browser (to fetch resources from sites that require permissions):
Get the URL
Download the resource
Analyze
Process
import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.net.URL;

public class http {
    public static void main(String[] args) throws Exception {
        // https is the secure variant of http
        // URL.openStream() opens a connection to the URL and returns an
        // InputStream for reading data from that connection
        // Get the URL
        URL url = new URL("https://www.jd.com");
        // Download the resource
        InputStream is = url.openStream();
        BufferedReader br = new BufferedReader(new InputStreamReader(is, "UTF-8"));
        String msg = null;
        while ((msg = br.readLine()) != null) {
            System.out.println(msg);
        }
        br.close();
    }
}
Fetching network resources that require permissions:
public class http {
    public static void main(String[] args) throws Exception {
        // URL.openConnection() returns a URLConnection instance representing
        // a connection to the remote object referred to by the URL
        // Subclasses of URLConnection include HttpURLConnection and JarURLConnection
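The snippet above breaks off, but the idea it introduces can be sketched: cast the URLConnection returned by openConnection() to HttpURLConnection and set a User-Agent request header so the request looks like it comes from a browser. The class name HttpUAExample and the exact User-Agent string below are illustrative assumptions, not part of the original notes.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class HttpUAExample {
    public static void main(String[] args) throws Exception {
        URL url = new URL("https://www.jd.com");
        // openConnection() does not connect yet; for http/https URLs the
        // returned URLConnection is actually an HttpURLConnection
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        // Simulate a browser by sending a User-Agent header
        // (the value here is just an example browser string)
        conn.setRequestProperty("User-Agent",
                "Mozilla/5.0 (Windows NT 10.0; Win64; x64)");
        // getInputStream() triggers the actual request; read the page line by line
        try (BufferedReader br = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), "UTF-8"))) {
            String line;
            while ((line = br.readLine()) != null) {
                System.out.println(line);
            }
        }
    }
}
```

Compared with URL.openStream(), going through HttpURLConnection lets you set request headers before the connection is made, which is what "simulating a browser" amounts to here.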