- After inspecting the data interface, it turns out the JSON returned by the NetEase News URL is generated dynamically based on time
- Add a button to the project's query-list page and modify the JSON-parsing method to complete the data import
Add a button at the top of the page for fetching the data:
<td style="width:150px"></td>
<td style="margin-left:20px"><input type="button" style="width:60px" class="l-form-buttons" id="getJson" name="获取json数据" value="获取json数据"></td>
Bind the click event:
$("#getJson").click(function(){
    var url = "/news/getJson";
    $.post(url, {}, function(res){
        if (res.code == 200) {
            $.ligerDialog.tip({icon: 'succeed', time: 1, content: res.msg});
        } else {
            $.ligerDialog.tip(res.msg);
        }
    });
});
Add a method to fetch the data. To preserve the generated JSON, the fetched data is saved to a folder and the file name is returned; the original JSON-parsing method is then modified to accept that file name. Using soft-coding, the file storage directory and the request URL are configured in the application.properties file:
news.json.url=http://c.m.163.com/nc/article/headline/T1348647853363/0-100.html
news.json.dir=F:/springboot/springboot_solr/src/main/resources/json/
In the newController class, use the @Value annotation with the ${} placeholder to inject the values defined in application.properties into the jsonUrl and fileDir variables:
@Value("${news.json.url}")
private String jsonUrl;
@Value("${news.json.dir}")
private String fileDir;
@RequestMapping("getJson")
@ResponseBody
public ResultData getJson(){
    try {
        // fetch the remote JSON and save it to a local file
        String fileName = GetNewsJson.getJsonData(jsonUrl, fileDir);
        // parse the saved file and index the documents into Solr
        List<NewDoc> list = JsonUtils.importDataToSolr(fileName);
        newDocSolr.addList(list);
    } catch (Exception e) {
        e.printStackTrace();
        return new ResultData("-1", "", "失败");
    }
    return new ResultData("200", "", "成功");
}
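The ResultData class used by the controller is not shown in this post. Here is a minimal sketch, assuming a simple code/data/msg triple matching the three-argument constructor calls above; the field names and getters are my assumption, not confirmed by the source:

```java
// Hypothetical sketch of the ResultData response wrapper; the real class in
// the project may differ -- only the constructor arity is implied by the
// calls in the controller above.
public class ResultData {
    private String code; // e.g. "200" for success, "-1" for failure
    private String data; // payload, left empty in this controller
    private String msg;  // human-readable message shown by the front end

    public ResultData(String code, String data, String msg) {
        this.code = code;
        this.data = data;
        this.msg = msg;
    }

    public String getCode() { return code; }
    public String getData() { return data; }
    public String getMsg() { return msg; }
}
```

With getters like these, Jackson serializes the object to JSON, so the front-end callback can read `res.code` and `res.msg`.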
Create a new class, GetNewsJson, to fetch the JSON data:
package com.gc.utils;
import java.io.BufferedReader;
import java.io.BufferedWriter;
import java.io.File;
import java.io.FileOutputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.net.URL;
import java.net.URLConnection;
import java.util.Random;
import org.springframework.stereotype.Component;
@Component
public class GetNewsJson {
    public static String getJsonData(String jsonUrl, String fileDir) throws Exception {
        // open a connection to the news endpoint with a browser User-Agent
        URL url = new URL(jsonUrl);
        URLConnection connection = url.openConnection();
        connection.addRequestProperty("User-Agent", "Mozilla/5.0 (Windows NT 6.1; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/71.0.3578.98 Safari/537.36");
        connection.connect();
        // read the response body as UTF-8 text
        InputStream inputStream = connection.getInputStream();
        BufferedReader re = new BufferedReader(new InputStreamReader(inputStream, "utf-8"));
        StringBuilder json = new StringBuilder();
        String line;
        while ((line = re.readLine()) != null) {
            json.append(line);
        }
        // build a file name from the timestamp plus a random suffix
        String fileName = fileDir + System.currentTimeMillis() + new Random().nextInt(1000) + ".json";
        // write the JSON to disk and return the file name for the parser
        BufferedWriter wr = new BufferedWriter(new OutputStreamWriter(new FileOutputStream(new File(fileName)), "utf-8"));
        wr.write(json.toString());
        wr.flush();
        inputStream.close();
        wr.close();
        return fileName;
    }
}
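One caveat with the timestamp-plus-random file name above: two requests in the same millisecond could still draw the same random suffix and overwrite each other's file. If that matters, a UUID-based name (my suggestion, not part of the original post) avoids collisions entirely:

```java
import java.util.UUID;

public class JsonFileNamer {
    // Build a collision-free file name for a saved JSON snapshot.
    // The fileDir argument plays the same role as in getJsonData above.
    public static String uniqueJsonName(String fileDir) {
        return fileDir + UUID.randomUUID() + ".json";
    }
}
```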
The final step is to modify the original JSON-parsing method: add a file-name parameter and perform an empty check on it:
public class JsonUtils {
    public static List<NewDoc> importDataToSolr(String fileName) {
        InputStream in = null;
        BufferedReader br = null;
        List<NewDoc> list = null;
        try {
            // fall back to the bundled sample file when no file name is passed in
            if (StringUtils.isEmpty(fileName)) {
                in = new FileInputStream(new File("F:\\springboot\\springboot_solr\\src\\main\\resources\\news.json"));
            } else {
                in = new FileInputStream(new File(fileName));
            }
            br = new BufferedReader(new InputStreamReader(in));
            String line;
            StringBuilder strb = new StringBuilder();
            while ((line = br.readLine()) != null) {
                strb.append(line);
            }
            // ignore JSON fields that have no matching property on NewDoc
            ObjectMapper mapper = new ObjectMapper();
            mapper.configure(DeserializationFeature.FAIL_ON_UNKNOWN_PROPERTIES, false);
            JsonNode jsonNode = mapper.readTree(strb.toString());
            JsonNode root = jsonNode.get("T1348647853363");
            if (root != null && root.isArray()) {
                list = mapper.readValue(root.toString(), new TypeReference<List<NewDoc>>() {});
            }
            if (list != null) {
                for (NewDoc newDoc : list) {
                    // give each document an id: timestamp plus a random suffix
                    newDoc.setId(String.valueOf(System.currentTimeMillis()) + new Random().nextInt(1000));
                }
            }
        } catch (Exception e) {
            e.printStackTrace();
        } finally {
            try {
                if (br != null) {
                    br.close();
                }
                if (in != null) {
                    in.close();
                }
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        return list;
    }
}
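The NewDoc entity is also not listed in this post. Since FAIL_ON_UNKNOWN_PROPERTIES is disabled above, the class only needs the properties you actually care about. A minimal sketch follows; only setId is confirmed by the parsing code, and the title and digest fields are my guess at the feed's field names, not confirmed by the source:

```java
// Hypothetical sketch of the NewDoc entity used above; only setId is
// confirmed by the parsing code -- title and digest are assumed fields.
public class NewDoc {
    private String id;     // assigned locally: timestamp + random suffix
    private String title;  // assumed to map to a "title" field in the feed
    private String digest; // assumed to map to a "digest" field in the feed

    public String getId() { return id; }
    public void setId(String id) { this.id = id; }
    public String getTitle() { return title; }
    public void setTitle(String title) { this.title = title; }
    public String getDigest() { return digest; }
    public void setDigest(String digest) { this.digest = digest; }
}
```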
The finished result looks like this:
The record count grew from the original 80 to 100. This data interface is updated in real time, which sidesteps the data-acquisition problem; if you would rather build a crawler, you can use HttpClient + Jsoup to filter page elements and collect the data.
Next, we will configure the log4j.properties file to handle log printing and recording.