Tika is a toolkit for parsing documents: it detects the document type on its own and then uses the appropriate jar/parser to handle that format.
Today I ran into a requirement: parse the contents of files on the network into data.
Write a crawler, process the pages, and extract the file URLs.
Then download each file locally and parse it following the Tika demo.
But downloading the files to disk not only wastes space, it also costs disk I/O.
Since all we need is the data, we can process everything in memory.
Use
InputStream in = response.getEntity().getContent();
Then parse the file's metadata and body text with Tika. But Tika parses in two passes, and each pass consumes (advances) the InputStream.
Solution 1: copy the InputStream
ByteArrayOutputStream baos = new ByteArrayOutputStream();
// Copy the network stream into memory
// (nio can generally do better if you need it...
// and please, unlike me, do something about the Exceptions :D)
byte[] buffer = new byte[1024];
int len;
while ((len = in.read(buffer)) > -1) {
    baos.write(buffer, 0, len);
}
baos.flush();
// Open new InputStreams using the recorded bytes
// Can be repeated as many times as you wish
InputStream is1 = new ByteArrayInputStream(baos.toByteArray());
InputStream is2 = new ByteArrayInputStream(baos.toByteArray());
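As a minimal, self-contained sketch (plain java.io only, no Tika or HttpClient on the classpath; the class name and the "hello tika" payload are just stand-ins for the real response body), the copy-then-reopen trick looks like this: the recorded bytes can back any number of independent streams, so one copy can feed Tika's type detection and another its text extraction.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class StreamCopyDemo {
    // Drain the stream into a reusable byte array
    static byte[] toBytes(InputStream in) throws IOException {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        byte[] buffer = new byte[1024];
        int len;
        while ((len = in.read(buffer)) > -1) {
            baos.write(buffer, 0, len);
        }
        return baos.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        // Stand-in for the HTTP response body
        InputStream source = new ByteArrayInputStream("hello tika".getBytes("UTF-8"));
        byte[] recorded = toBytes(source);

        // Two independent streams over the same recorded bytes:
        // one could go to Tika's detector, the other to its parser
        InputStream is1 = new ByteArrayInputStream(recorded);
        InputStream is2 = new ByteArrayInputStream(recorded);
        System.out.println(new String(toBytes(is1), "UTF-8")); // prints "hello tika"
        System.out.println(new String(toBytes(is2), "UTF-8")); // prints "hello tika"
    }
}
```

The whole file is buffered in memory, so this only makes sense for files that comfortably fit in the heap.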
Solution 2: wrap the stream in a CloseShieldInputStream (commons-io)
InputStream is = getStream(); // obtain the stream
CloseShieldInputStream csis = new CloseShieldInputStream(is);
// call the bad function that closes streams it shouldn't
badFunction(csis);
// happiness follows: keep using the original input stream
is.read();
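If commons-io is not on the classpath, the same idea is easy to sketch with plain java.io. The class below (ShieldedInputStream and badFunction are hypothetical names for this demo, mirroring what commons-io's CloseShieldInputStream does) forwards all reads but swallows close(), so a careless callee cannot close the underlying stream.

```java
import java.io.ByteArrayInputStream;
import java.io.FilterInputStream;
import java.io.IOException;
import java.io.InputStream;

public class CloseShieldDemo {
    // Forwards all reads but ignores close(), protecting the wrapped stream
    static class ShieldedInputStream extends FilterInputStream {
        ShieldedInputStream(InputStream in) { super(in); }
        @Override
        public void close() { /* deliberately do nothing */ }
    }

    // Stand-in for a "bad" function that closes the stream it was given
    static void badFunction(InputStream in) throws IOException {
        in.read();  // consumes one byte
        in.close(); // would normally kill the stream for everyone
    }

    public static void main(String[] args) throws IOException {
        InputStream is = new ByteArrayInputStream(new byte[] {1, 2, 3});
        badFunction(new ShieldedInputStream(is));
        // The original stream is still readable after badFunction "closed" it
        System.out.println(is.read()); // prints 2
    }
}
```

Note the trade-off against Solution 1: a close shield protects the stream from being closed, but it does not rewind it. If both Tika passes need to read from the beginning, you still need the byte-array copy (or Tika's own TikaInputStream/mark-reset support).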