Using LZO in Java: how do I decompress an LZO-compressed byte array with the java-lzo library?

I'm trying to decompress compressed byte array using java-lzo library. I'm following this reference.

I added the following Maven dependency to pom.xml:

```xml
<dependency>
    <groupId>org.anarres.lzo</groupId>
    <artifactId>lzo-core</artifactId>
    <version>1.0.5</version>
</dependency>
```

I created a method that accepts an LZO-compressed byte array and the destination byte array length as arguments.

Program:

```java
private byte[] decompress(byte[] src, int len) {
    ByteArrayInputStream input = new ByteArrayInputStream(src);
    ByteArrayOutputStream out = new ByteArrayOutputStream();
    LzoAlgorithm algorithm = LzoAlgorithm.LZO1X;
    lzo_uintp lzo = new lzo_uintp(len);
    LzoDecompressor decompressor = LzoLibrary.getInstance().newDecompressor(algorithm, null);
    LzoInputStream stream = new LzoInputStream(input, decompressor);
    try {
        int data = stream.read();
        while (data != -1) {
            out.write(data);
            data = stream.read();
        }
        out.flush();
    } catch (IOException ex) {
        ex.printStackTrace();
    }
    return out.toByteArray();
}
```

I'm stuck because stream.read() always returns -1. I checked the input array and it is filled with data. stream.available() also always returns 0 in my case, yet input.available() on the underlying ByteArrayInputStream returns the length of the array.

The stack trace matches the -1 behaviour I described:

```
java.io.EOFException
	at org.anarres.lzo.LzoInputStream.readBytes(LzoInputStream.java:183)
	at org.anarres.lzo.LzoInputStream.readBlock(LzoInputStream.java:132)
	at org.anarres.lzo.LzoInputStream.fill(LzoInputStream.java:119)
	at org.anarres.lzo.LzoInputStream.read(LzoInputStream.java:90)
```

So, am I initializing the LzoInputStream incorrectly, or is there something else I need to do after that? Any suggestions would be appreciated!

Solution

For the .lzo file format, you should first read (and consume) the header information before handing the remaining stream to the LzoInputStream.

Then you can read the actual data till you reach eof.
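If the bytes really came out of the lzop tool (i.e. a whole .lzo file image, header included), the same lzo-core library ships org.anarres.lzo.LzopInputStream, which parses the header itself. A minimal sketch, assuming the lzo-core 1.0.5 artifact above is on the classpath (the method name is my own):

```java
import org.anarres.lzo.LzopInputStream;

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;

public class LzopDecompress {
    // Decompress a complete lzop file image (header + compressed blocks),
    // as opposed to a raw LZO1X block, which LzoInputStream expects.
    public static byte[] decompressLzopFile(byte[] src) throws IOException {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        // The LzopInputStream constructor reads and validates the header.
        try (LzopInputStream in = new LzopInputStream(new ByteArrayInputStream(src))) {
            byte[] buf = new byte[4096];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
        }
        return out.toByteArray();
    }
}
```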

I guess first 37 bytes are header related info and the compression algorithm information is available in 16th byte.

LZO header format is documented at https://gist.github.com/jledet/1333896
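Before choosing a stream class, you can detect whether a buffer is a whole lzop file (header present) or a raw LZO1X block by checking for the 9-byte lzop magic from that header documentation. A self-contained sketch (the class and helper names are my own):

```java
import java.util.Arrays;

public class LzopMagic {
    // 9-byte magic that starts every lzop-format file, per the lzop file-format docs
    static final byte[] MAGIC = {
        (byte) 0x89, 'L', 'Z', 'O', 0x00, 0x0d, 0x0a, 0x1a, 0x0a
    };

    // true if src begins with the lzop magic, i.e. it is a whole .lzo file,
    // not a raw LZO1X-compressed block
    public static boolean hasLzopHeader(byte[] src) {
        return src.length >= MAGIC.length
                && Arrays.equals(Arrays.copyOfRange(src, 0, MAGIC.length), MAGIC);
    }

    public static void main(String[] args) {
        byte[] rawBlock = {0x11, 0x22, 0x33}; // no header
        System.out.println(hasLzopHeader(rawBlock)); // false
    }
}
```

If the magic is absent, the data is likely a bare compressed block and the raw LzoInputStream path is appropriate.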

