Background
Building an HTTP file service on Netty offers two options: 1. aggregate with HttpObjectAggregator, so that only a single FullHttpRequest needs handling, which keeps the implementation simple; 2. do without HttpObjectAggregator, which is more complex to implement but has a real advantage under high concurrency (provided the handling of each chunk of data keeps up). This example came out of several passes through the official Netty examples, and implements REST URL parsing, full chunked upload, resumable (Range) chunked upload, Base64 upload, and chunked download. I have done my best to keep the code simple and readable, since using Netty as an HTTP server involves more pitfalls and complexity than other frameworks, all the more so here, where chunked reads/writes and non-aggregated HTTP are used for performance.
1. HttpFileServer
The server code holds few surprises, but the following points are worth knowing:
- NioEventLoopGroup defaults to CPU cores * 2 threads (a value close to the optimal thread count).
- ChunkedWriteHandler must be added to support chunked downloads (the HTTP encoder/decoder are required in any case).
- Understand Netty's pipeline execution order (ChannelInboundHandler handlers run in the order they were added, ChannelOutboundHandler handlers in reverse; this is widely documented).
One more note: HttpObjectAggregator is skipped so that memory usage stays friendly under concurrency (provided the ServerHandler's processing can keep up). For a file service that is meant to handle real concurrency, not blowing up memory or CPU deserves particular care.
package cn.bossfriday.fileserver.http;
import cn.bossfriday.fileserver.common.conf.FileServerConfigManager;
import io.netty.bootstrap.ServerBootstrap;
import io.netty.channel.Channel;
import io.netty.channel.ChannelInitializer;
import io.netty.channel.ChannelOption;
import io.netty.channel.EventLoopGroup;
import io.netty.channel.nio.NioEventLoopGroup;
import io.netty.channel.socket.SocketChannel;
import io.netty.channel.socket.nio.NioServerSocketChannel;
import io.netty.handler.codec.http.HttpRequestDecoder;
import io.netty.handler.codec.http.HttpResponseEncoder;
import io.netty.handler.logging.LogLevel;
import io.netty.handler.logging.LoggingHandler;
import io.netty.handler.stream.ChunkedWriteHandler;
import lombok.extern.slf4j.Slf4j;
/**
* HttpFileServer
*
* @author chenx
*/
@Slf4j
public class HttpFileServer {
private HttpFileServer() {
}
/**
* start
*
* @throws InterruptedException
*/
public static void start() throws InterruptedException {
int port = FileServerConfigManager.getFileServerConfig().getHttpPort();
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
ServerBootstrap b = new ServerBootstrap();
b.group(bossGroup, workerGroup);
b.channel(NioServerSocketChannel.class);
b.handler(new LoggingHandler(LogLevel.ERROR));
b.option(ChannelOption.SO_BACKLOG, 1024);
b.option(ChannelOption.SO_REUSEADDR, true);
b.option(ChannelOption.SO_RCVBUF, 1024 * 1024 * 10);
b.childHandler(new ChannelInitializer<SocketChannel>() {
@Override
protected void initChannel(SocketChannel socketChannel) {
socketChannel.pipeline().addLast(new HttpRequestDecoder());
socketChannel.pipeline().addLast(new HttpResponseEncoder());
socketChannel.pipeline().addLast(new ChunkedWriteHandler());
socketChannel.pipeline().addLast(new HttpFileServerHandler());
}
}
);
Channel ch = b.bind(port).sync().channel();
log.info("HttpFileServer started, port: {}", port);
ch.closeFuture().sync();
} finally {
bossGroup.shutdownGracefully();
workerGroup.shutdownGracefully();
}
}
}
2. HttpFileServerHandler
The server handler is the core of this example; the following points deserve special mention:
- Be clear about the scope of SimpleChannelInboundHandler before using it (most official Netty examples do). For a Netty-based file service it is better avoided (search for the pitfalls people have hit with SimpleChannelInboundHandler), so this example uses ChannelInboundHandlerAdapter instead.
- With Netty you must be acutely aware that mishandling ByteBuf leads straight to direct-memory leaks (the same goes for DataBuffer in Spring Core IO, which is essentially a wrapper over ByteBuf). The recommended pattern is explicit release in try-finally (Netty's documentation strongly recommends the same). When debugging before a release, add -Dio.netty.leakDetectionLevel=PARANOID so that every request is checked for leaks (without it, keep a sharp eye out for the word "LEAK" in your logs).
- Without HttpObjectAggregator, a complete HTTP request takes 1+N reads: one HttpRequest read plus N chunked HttpContent reads (the maximum chunk size depends on Netty's memory-allocation mechanism; unless specified, the default applies).
- Because there is no aggregation, decoding an incomplete Base64 payload can fail, so the Base64 upload path aggregates the content itself and decodes it in one pass once complete (Base64 upload is only meant for small files such as screenshots, so aggregating here does no harm).
- Serving HTTP is just one of Netty's many use cases, so it does not do the request binding that Spring Boot does (@RequestBody, @RequestParam, @PathVariable, ...); these small conveniences are your own responsibility. The UrlParser used here is a home-grown parser that extracts path and query parameters from the HttpRequest into Maps for later use; its code is not listed here, so dig it out of the complete source on git if you need it.
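UrlParser itself is not listed here (see the repo), but its path-template matching can be sketched in a few lines of plain Java. This is my own illustration of the idea, not the project's actual implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class PathTemplate {

    /**
     * Matches a concrete path against a template like "/{uploadType}/{version}/{namespace}"
     * and returns the placeholder values. Returns null when the segment counts differ
     * or a literal segment does not match.
     */
    public static Map<String, String> match(String template, String path) {
        String[] t = template.split("/");
        String[] p = path.split("/");
        if (t.length != p.length) {
            return null;
        }
        Map<String, String> args = new HashMap<>();
        for (int i = 0; i < t.length; i++) {
            if (t[i].startsWith("{") && t[i].endsWith("}")) {
                // placeholder segment: capture the value under the placeholder name
                args.put(t[i].substring(1, t[i].length() - 1), p[i]);
            } else if (!t[i].equals(p[i])) {
                // literal segment must match exactly
                return null;
            }
        }
        return args;
    }
}
```

For example, matching "/{uploadType}/{engineVersion}/{namespace}" against "/full/v1/normal" yields {uploadType=full, engineVersion=v1, namespace=normal}.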
package cn.bossfriday.fileserver.http;
import cn.bossfriday.common.exception.BizException;
import cn.bossfriday.common.http.RangeParser;
import cn.bossfriday.common.http.UrlParser;
import cn.bossfriday.common.http.model.Range;
import cn.bossfriday.fileserver.actors.model.FileDownloadMsg;
import cn.bossfriday.fileserver.actors.model.WriteTmpFileMsg;
import cn.bossfriday.fileserver.common.FileServerHelper;
import cn.bossfriday.fileserver.common.enums.FileUploadType;
import cn.bossfriday.fileserver.context.FileTransactionContextManager;
import cn.bossfriday.fileserver.engine.StorageHandlerFactory;
import cn.bossfriday.fileserver.engine.StorageTracker;
import cn.bossfriday.fileserver.engine.core.IMetaDataHandler;
import cn.bossfriday.fileserver.engine.model.MetaDataIndex;
import io.netty.buffer.ByteBuf;
import io.netty.channel.ChannelHandlerContext;
import io.netty.channel.ChannelInboundHandlerAdapter;
import io.netty.handler.codec.http.*;
import io.netty.handler.codec.http.multipart.*;
import lombok.extern.slf4j.Slf4j;
import org.apache.commons.codec.binary.Base64;
import java.net.URI;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.Map;
import static cn.bossfriday.fileserver.actors.model.FileDownloadMsg.FIRST_CHUNK_INDEX;
import static cn.bossfriday.fileserver.common.FileServerConst.*;
/**
* HttpFileServerHandler
* <p>
* Note:
* To keep server-side memory usage friendly, HTTP aggregation (HttpObjectAggregator) is not used;
* with HttpObjectAggregator only a single FullHttpRequest would need to be read, which is far simpler.
* Without aggregation, a complete HTTP request takes 1+N reads:
* 1. one HttpRequest read;
* 2. N HttpContent reads: downstream processing preserves thread affinity to achieve zero-copy, sequential file writes
*
* @author chenx
*/
@Slf4j
public class HttpFileServerHandler extends ChannelInboundHandlerAdapter {
private HttpRequest request;
private HttpMethod httpMethod;
private Map<String, String> pathArgsMap;
private Map<String, String> queryArgsMap;
private HttpPostRequestDecoder decoder;
private String fileTransactionId;
private String storageNamespace;
private FileUploadType fileUploadType;
private byte[] base64AggregatedData;
private String metaDataIndexString;
private Range range;
private StringBuilder errorMsg = new StringBuilder();
private int version = DEFAULT_STORAGE_ENGINE_VERSION;
private long tempFilePartialDataOffset = 0;
private long fileTotalSize = 0;
private int base64AggregateIndex = 0;
private boolean isKeepAlive = false;
private static final HttpDataFactory HTTP_DATA_FACTORY = new DefaultHttpDataFactory(false);
private static final UrlParser UPLOAD_URL_PARSER = new UrlParser("/{" + URI_ARGS_NAME_UPLOAD_TYPE + "}/{" + URI_ARGS_NAME_ENGINE_VERSION + "}/{" + URI_ARGS_NAME_STORAGE_NAMESPACE + "}");
private static final UrlParser DOWNLOAD_URL_PARSER = new UrlParser("/" + URL_RESOURCE + "/{" + URI_ARGS_NAME_ENGINE_VERSION + "}/{" + URI_ARGS_NAME_META_DATA_INDEX_STRING + "}");
@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
try {
if (msg instanceof HttpRequest) {
this.httpRequestRead(ctx, (HttpRequest) msg);
}
if (msg instanceof HttpContent) {
this.httpContentRead((HttpContent) msg);
}
} catch (Exception ex) {
log.error("channelRead error: " + this.fileTransactionId, ex);
this.errorMsg.append(ex.getMessage());
} finally {
if (msg instanceof LastHttpContent) {
this.lastHttpContentChannelRead();
}
}
}
@Override
public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
log.error("exceptionCaught: " + this.fileTransactionId, cause);
if (ctx.channel().isActive()) {
ctx.channel().close();
}
// delete the temporary file in case of error
FileServerHelper.abnormallyDeleteTmpFile(this.fileTransactionId, this.version);
}
/**
* httpRequestRead
*
* @param ctx
* @param httpRequest
*/
private void httpRequestRead(ChannelHandlerContext ctx, HttpRequest httpRequest) {
try {
this.request = httpRequest;
this.isKeepAlive = HttpUtil.isKeepAlive(httpRequest);
this.fileTransactionId = FileServerHelper.getFileTransactionId(httpRequest);
FileTransactionContextManager.getInstance().registerContext(this.fileTransactionId, ctx, this.isKeepAlive, this.request.headers().get("USER-AGENT"));
if (HttpMethod.GET.equals(httpRequest.method())) {
this.parseUrl(DOWNLOAD_URL_PARSER);
this.httpMethod = HttpMethod.GET;
this.metaDataIndexString = this.getUrlArgValue(this.pathArgsMap, URI_ARGS_NAME_META_DATA_INDEX_STRING);
return;
}
if (HttpMethod.POST.equals(httpRequest.method())) {
this.parseUrl(UPLOAD_URL_PARSER);
this.httpMethod = HttpMethod.POST;
this.storageNamespace = this.getUrlArgValue(this.pathArgsMap, URI_ARGS_NAME_STORAGE_NAMESPACE);
this.fileUploadType = FileUploadType.getByName(this.getUrlArgValue(this.pathArgsMap, URI_ARGS_NAME_UPLOAD_TYPE));
if (this.fileUploadType == FileUploadType.FULL_UPLOAD) {
// full upload
this.fileTotalSize = Long.parseLong(FileServerHelper.getHeaderValue(this.request, HEADER_FILE_TOTAL_SIZE));
} else if (this.fileUploadType == FileUploadType.BASE_64_UPLOAD) {
// Base64 upload
int contentLength = Integer.parseInt(FileServerHelper.getHeaderValue(this.request, String.valueOf(HttpHeaderNames.CONTENT_LENGTH)));
this.base64AggregatedData = new byte[contentLength];
} else if (this.fileUploadType == FileUploadType.RANGE_UPLOAD) {
// resumable (Range) upload
this.fileTotalSize = Long.parseLong(FileServerHelper.getHeaderValue(this.request, HEADER_FILE_TOTAL_SIZE));
this.range = RangeParser.parseAndGetFirstRange(FileServerHelper.getHeaderValue(this.request, HttpHeaderNames.RANGE.toString()));
}
return;
}
if (HttpMethod.DELETE.equals(httpRequest.method())) {
this.parseUrl(DOWNLOAD_URL_PARSER);
this.httpMethod = HttpMethod.DELETE;
this.metaDataIndexString = this.getUrlArgValue(this.pathArgsMap, URI_ARGS_NAME_META_DATA_INDEX_STRING);
return;
}
throw new BizException("unsupported HttpMethod!");
} catch (Exception ex) {
log.error("HttpRequest process error!", ex);
this.errorMsg.append(ex.getMessage());
} finally {
if (HttpMethod.POST.equals(this.httpMethod)) { // null-safe: httpMethod stays unset if the request failed to parse
try {
this.decoder = new HttpPostRequestDecoder(HTTP_DATA_FACTORY, this.request);
} catch (HttpPostRequestDecoder.ErrorDataDecoderException e1) {
log.warn("getHttpDecoder Error:" + e1.getMessage());
}
}
}
}
/**
* httpContentRead
* Direct-memory leaks caused by Netty ByteBuf deserve close attention;
* when debugging, add -Dio.netty.leakDetectionLevel=PARANOID to have every request checked for leaks.
*
* @param httpContent
*/
private void httpContentRead(HttpContent httpContent) {
try {
if (HttpMethod.POST.equals(this.httpMethod)) {
if (this.fileUploadType == FileUploadType.BASE_64_UPLOAD) {
this.base64Upload(httpContent);
} else {
this.fileUpload(httpContent);
}
} else if (HttpMethod.GET.equals(this.httpMethod)) {
this.fileDownload(httpContent);
} else if (HttpMethod.DELETE.equals(this.httpMethod)) {
this.deleteFile(httpContent);
} else {
if (httpContent instanceof LastHttpContent) {
this.errorMsg.append("unsupported http method");
}
}
} finally {
if (httpContent.refCnt() > 0) {
httpContent.release();
}
}
}
/**
* lastHttpContentChannelRead
*/
private void lastHttpContentChannelRead() {
this.reset();
this.base64AggregatedData = null;
if (this.hasError()) {
FileServerHelper.abnormallyDeleteTmpFile(this.fileTransactionId, this.version);
FileServerHelper.sendResponse(this.fileTransactionId, HttpResponseStatus.INTERNAL_SERVER_ERROR, this.errorMsg.toString());
}
}
/**
* fileUpload
*/
private void fileUpload(HttpContent httpContent) {
if (this.decoder == null) {
return;
}
try {
/**
* Initialized the internals from a new chunk
* content – the new received chunk
*/
this.decoder.offer(httpContent);
if (!this.hasError()) {
this.chunkedFileUpload();
}
} catch (Exception ex) {
log.error("HttpFileServerHandler.fileUpload() error!", ex);
this.errorMsg.append(ex.getMessage());
}
}
/**
* chunkedFileUpload (chunked file upload)
*/
private void chunkedFileUpload() {
try {
while (this.decoder.hasNext()) {
/**
* Returns the next available InterfaceHttpData or null if, at the time it is called,
* there is no more available InterfaceHttpData. A subsequent call to offer(httpChunk) could enable more data.
* Be sure to call ReferenceCounted.release() after you are done with processing to make sure to not leak any resources
*/
InterfaceHttpData data = this.decoder.next();
if (data instanceof FileUpload) {
this.currentPartialHttpDataProcess((FileUpload) data);
}
}
/**
* Returns the current InterfaceHttpData if currently in decoding status,
* meaning all data are not yet within, or null if there is no InterfaceHttpData currently in decoding status
* (either because none yet decoded or none currently partially decoded).
* Full decoded ones are accessible through hasNext() and next() methods.
*/
HttpData data = (HttpData) this.decoder.currentPartialHttpData();
if (data instanceof FileUpload) {
this.currentPartialHttpDataProcess((FileUpload) data);
}
} catch (HttpPostRequestDecoder.EndOfDataDecoderException endOfDataDecoderException) {
// EndOfDataDecoderException is the decoder's normal "no more data" signal, not a failure
log.warn("HttpFileServerHandler.chunkedFileUpload() EndOfDataDecoderException!");
} catch (Exception ex) {
log.error("HttpFileServerHandler.chunkedFileUpload() error!", ex);
this.errorMsg.append(ex.getMessage());
}
}
/**
* currentPartialHttpDataProcess
*
* @param currentPartialData
*/
private void currentPartialHttpDataProcess(FileUpload currentPartialData) {
byte[] partialData = null;
try {
ByteBuf byteBuf = currentPartialData.getByteBuf();
int readBytesCount = byteBuf.readableBytes();
partialData = new byte[readBytesCount];
byteBuf.readBytes(partialData);
WriteTmpFileMsg msg = new WriteTmpFileMsg();
msg.setStorageEngineVersion(this.version);
msg.setFileTransactionId(this.fileTransactionId);
msg.setStorageNamespace(this.storageNamespace);
msg.setKeepAlive(this.isKeepAlive);
msg.setFileName(URLDecoder.decode(currentPartialData.getFilename(), StandardCharsets.UTF_8.name()));
msg.setRange(this.range);
msg.setFileTotalSize(this.fileTotalSize);
msg.setOffset(this.tempFilePartialDataOffset);
msg.setData(partialData);
StorageTracker.getInstance().onPartialUploadDataReceived(msg);
} catch (Exception ex) {
log.error("HttpFileServerHandler.currentPartialHttpDataProcess() error!", ex);
this.errorMsg.append(ex.getMessage());
} finally {
if (partialData != null) {
this.tempFilePartialDataOffset += partialData.length;
}
if (currentPartialData.refCnt() > 0) {
currentPartialData.release();
}
}
}
/**
* base64Upload
* Decoding an incomplete Base64 payload may fail, so Base64 upload first aggregates the whole content, then decodes it and performs a full upload.
* This is why base64Upload is only suitable for small-file scenarios such as screenshot uploads.
*
* @param httpContent
*/
private void base64Upload(HttpContent httpContent) {
ByteBuf byteBuf = null;
byte[] currentPartialData = null;
byte[] decodedFullData = null;
try {
// aggregate the chunk data
byteBuf = httpContent.content();
int currentPartialDataLength = byteBuf.readableBytes();
currentPartialData = new byte[currentPartialDataLength];
byteBuf.readBytes(currentPartialData);
System.arraycopy(currentPartialData, 0, this.base64AggregatedData, this.base64AggregateIndex, currentPartialDataLength);
this.base64AggregateIndex += currentPartialDataLength;
// aggregation complete
if (httpContent instanceof LastHttpContent) {
decodedFullData = Base64.decodeBase64(this.base64AggregatedData);
WriteTmpFileMsg msg = new WriteTmpFileMsg();
msg.setStorageEngineVersion(this.version);
msg.setFileTransactionId(this.fileTransactionId);
msg.setStorageNamespace(this.storageNamespace);
msg.setKeepAlive(this.isKeepAlive);
msg.setFileName(this.fileTransactionId + "." + this.getUrlArgValue(this.queryArgsMap, URI_ARGS_NAME_EXT));
msg.setRange(this.range);
msg.setFileTotalSize(decodedFullData.length);
msg.setOffset(this.tempFilePartialDataOffset);
msg.setData(decodedFullData);
StorageTracker.getInstance().onPartialUploadDataReceived(msg);
}
} finally {
if (byteBuf != null) {
byteBuf.release();
}
}
}
/**
* fileDownload
*
* @param httpContent
*/
private void fileDownload(HttpContent httpContent) {
if (httpContent instanceof LastHttpContent && !this.hasError()) {
try {
IMetaDataHandler metaDataHandler = StorageHandlerFactory.getMetaDataHandler(this.version);
MetaDataIndex metaDataIndex = metaDataHandler.downloadUrlDecode(this.metaDataIndexString);
FileDownloadMsg fileDownloadMsg = FileDownloadMsg.builder()
.fileTransactionId(this.fileTransactionId)
.metaDataIndex(metaDataIndex)
.chunkIndex(FIRST_CHUNK_INDEX)
.build();
StorageTracker.getInstance().onDownloadRequestReceived(fileDownloadMsg);
} catch (Exception ex) {
log.error("HttpFileServerHandler.fileDownload() error!", ex);
throw new BizException("File download error!");
}
}
}
/**
* deleteFile
*
* @param httpContent
*/
private void deleteFile(HttpContent httpContent) {
if (httpContent instanceof LastHttpContent && !this.hasError()) {
// file deletion logic goes here...
}
}
/**
* reset
*/
private void reset() {
try {
this.request = null;
if (this.decoder != null) {
this.decoder.destroy();
this.decoder = null;
}
log.info("reset done: " + this.fileTransactionId);
} catch (Exception e) {
log.error("HttpFileServerHandler.reset() error!", e);
}
}
/**
* parseUrl
*
* @param urlParser
*/
private void parseUrl(UrlParser urlParser) {
try {
URI uri = new URI(this.request.uri());
this.pathArgsMap = urlParser.parsePath(uri);
this.queryArgsMap = urlParser.parseQuery(uri);
this.version = FileServerHelper.parserEngineVersionString(UrlParser.getArgsValue(this.pathArgsMap, URI_ARGS_NAME_ENGINE_VERSION));
} catch (Exception ex) {
log.error("HttpFileServerHandler.parseUrl() error!", ex);
this.errorMsg.append(ex.getMessage());
}
}
/**
* getUrlArgValue
*
* @param argMap
* @param key
* @return
*/
private String getUrlArgValue(Map<String, String> argMap, String key) {
try {
return UrlParser.getArgsValue(argMap, key);
} catch (Exception ex) {
log.error("HttpFileServerHandler.getUrlArgValue() error!", ex);
this.errorMsg.append(ex.getMessage());
}
return null;
}
/**
* hasError
*/
private boolean hasError() {
return this.errorMsg.length() > 0;
}
}
3. Client test code
3.1 Complete source
https://github.com/bossfriday/bossfriday-nubybear
Note:
For the other design goals behind this file service, see:
"A distributed high-performance file service in Java": https://blog.csdn.net/camelials/article/details/124613041
3.2 Startup dependencies
The service depends on ZooKeeper (the ZK address is configured in service-config.xml); the startup class is FileServerBootstrap.
3.3 Client test code
See FileUploadTest in the complete source; it covers normalUpload, download, base64Upload and rangeUpload:
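The download test itself is not reproduced below; a minimal sketch using only the JDK's HttpURLConnection might look as follows. The base address matches the other tests, and I am assuming the URL_RESOURCE constant resolves to the path segment "resource"; check the repo constants before relying on this:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class DownloadTest {

    // Base address used by the other tests in FileUploadTest.
    static final String BASE_URL = "http://127.0.0.1:18086";

    /** Builds the download URL: /resource/{engineVersion}/{metaDataIndexString} (path segment assumed). */
    public static String downloadUrl(String engineVersion, String metaDataIndexString) {
        return BASE_URL + "/resource/" + engineVersion + "/" + metaDataIndexString;
    }

    /** Streams the response body straight to a local file. */
    public static void download(String url, Path target) throws IOException {
        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
        try (InputStream in = conn.getInputStream()) {
            Files.copy(in, target, StandardCopyOption.REPLACE_EXISTING);
        } finally {
            conn.disconnect();
        }
    }
}
```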
/**
* normalUpload
*
* @throws Exception
*/
private static void normalUpload() throws Exception {
CloseableHttpClient httpClient = null;
HttpPost httpPost = null;
CloseableHttpResponse httpResponse = null;
File file = new File("files/UploadTest中文123.pdf");
try {
httpClient = HttpClients.createDefault();
httpPost = new HttpPost("http://127.0.0.1:18086/full/v1/normal");
httpPost.addHeader(HEADER_FILE_TOTAL_SIZE, String.valueOf(file.length()));
MultipartEntityBuilder builder = MultipartEntityBuilder.create();
builder.setMode(HttpMultipartMode.BROWSER_COMPATIBLE);
builder.addBinaryBody("upfile", file, ContentType.create("application/x-zip-compressed"), URLEncoder.encode(file.getName(), "UTF-8"));
HttpEntity entity = builder.build();
httpPost.setEntity(entity);
// execute
httpResponse = httpClient.execute(httpPost);
System.out.println(EntityUtils.toString(httpResponse.getEntity()));
} finally {
if (httpPost != null) {
httpPost.releaseConnection();
}
if (httpResponse != null) {
try {
httpResponse.close();
} catch (Exception e) {
log.error("httpResponse close error!", e);
}
}
if (httpClient != null) {
try {
httpClient.close();
} catch (Exception e) {
log.error("httpClient close error!", e);
}
}
}
log.info("done");
}
/**
* rangeUpload
*
* @throws Exception
*/
private static void rangeUpload() throws Exception {
File localFile = new File("files/UploadTest中文123.pdf");
int chunkSize = 128 * 1024;
int fileTotalSize = (int) localFile.length();
int chunkCount = fileTotalSize % chunkSize == 0 ? (fileTotalSize / chunkSize) : (fileTotalSize / chunkSize + 1);
String fileTransactionId = UUID.randomUUID().toString();
// check that the file server behaves as expected when the httpClient connection is reused (N range-upload requests share one httpClient)
CloseableHttpClient httpClient = HttpClients.createDefault();
for (int i = 0; i < chunkCount; i++) {
int beginOffset = i * chunkSize;
int endOffset = (i + 1) * chunkSize - 1;
if (endOffset >= fileTotalSize) { // clamp to the last valid byte index (fileTotalSize - 1)
endOffset = fileTotalSize - 1;
}
String range = "bytes=" + beginOffset + "-" + endOffset;
int rangeLength = endOffset - beginOffset + 1;
byte[] rangeData = new byte[rangeLength];
readFile(localFile, beginOffset, rangeData);
HttpPost httpPost = null;
CloseableHttpResponse httpResponse = null;
try {
httpPost = new HttpPost("http://127.0.0.1:18086/range/v1/normal");
httpPost.addHeader(HttpHeaderNames.CONNECTION.toString(), "Keep-Alive");
httpPost.addHeader(HttpHeaderNames.RANGE.toString(), range);
httpPost.addHeader(HEADER_FILE_TRANSACTION_ID, fileTransactionId);
httpPost.addHeader(HEADER_FILE_TOTAL_SIZE, String.valueOf(fileTotalSize));
MultipartEntityBuilder builder = MultipartEntityBuilder.create();
builder.setMode(HttpMultipartMode.BROWSER_COMPATIBLE);
builder.addBinaryBody("upfile", rangeData, ContentType.create("application/x-zip-compressed"), URLEncoder.encode(localFile.getName(), "UTF-8"));
HttpEntity entity = builder.build();
httpPost.setEntity(entity);
httpResponse = httpClient.execute(httpPost);
HttpEntity respEntity = httpResponse.getEntity();
if (respEntity == null) {
String responseRangeHeaderValue = httpResponse.getHeaders(HttpHeaderNames.CONTENT_RANGE.toString())[0].getValue();
System.out.println(httpResponse.getStatusLine().getStatusCode() + ":" + responseRangeHeaderValue);
} else {
System.out.println(EntityUtils.toString(respEntity));
}
} finally {
if (httpPost != null) {
httpPost.releaseConnection();
}
if (httpResponse != null) {
try {
httpResponse.close();
} catch (Exception e) {
log.error("httpResponse close error!", e);
}
}
}
}
try {
httpClient.close();
} catch (Exception e) {
log.error("httpClient close error!", e);
}
}
/**
* base64Upload
*
* @throws Exception
*/
public static void base64Upload() throws Exception {
CloseableHttpClient httpClient = null;
HttpPost httpPost = null;
CloseableHttpResponse httpResponse = null;
Combo2<Integer, String> base64Combo = getBase64Combo();
String base64String = base64Combo.getV2();
try {
httpClient = HttpClients.createDefault();
httpPost = new HttpPost("http://127.0.0.1:18086/base64/v1/normal?ext=jpg");
httpPost.addHeader(HttpHeaderNames.CONNECTION.toString(), "Keep-Alive");
httpPost.addHeader(HttpHeaderNames.CONTENT_TYPE.toString(), "text/plain; charset=UTF-8");
httpPost.setEntity(new StringEntity(base64String, StandardCharsets.UTF_8));
httpResponse = httpClient.execute(httpPost);
System.out.println(EntityUtils.toString(httpResponse.getEntity()));
} finally {
if (httpPost != null) {
httpPost.releaseConnection();
}
if (httpResponse != null) {
try {
httpResponse.close();
} catch (Exception e) {
log.error("httpResponse close error!", e);
}
}
if (httpClient != null) {
try {
httpClient.close();
} catch (Exception e) {
log.error("httpClient close error!", e);
}
}
}
}
/**
* getBase64Combo
*
* @return
* @throws Exception
*/
private static Combo2<Integer, String> getBase64Combo() throws Exception {
File file = new File("files/Base64UploadTest.jpg");
// read the whole file; InputStream.available() plus a single read() is not guaranteed to return all bytes
byte[] buffer = java.nio.file.Files.readAllBytes(file.toPath());
return new Combo2<>(buffer.length, Base64.encodeBase64String(buffer));
}
/**
* readFile
*
* @param file
* @param offset
* @param targetBytes
* @throws Exception
*/
public static void readFile(File file, long offset, byte[] targetBytes) throws Exception {
try (RandomAccessFile raf = new RandomAccessFile(file, "r")) {
raf.seek(offset);
raf.readFully(targetBytes);
}
}
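The range arithmetic in rangeUpload above can be isolated into a small helper, which also makes the off-by-one boundary easy to test. The names below are mine, not part of the project:

```java
public class RangeHeaderUtil {

    /**
     * Builds the Range header value for chunk {@code chunkIndex} of a file,
     * clamping the end offset to the last valid byte index (fileTotalSize - 1).
     */
    public static String rangeHeader(long fileTotalSize, long chunkSize, long chunkIndex) {
        long begin = chunkIndex * chunkSize;
        if (begin >= fileTotalSize) {
            throw new IllegalArgumentException("chunkIndex out of range");
        }
        long end = Math.min((chunkIndex + 1) * chunkSize - 1, fileTotalSize - 1);
        return "bytes=" + begin + "-" + end;
    }

    /** Number of chunks needed to cover the file: ceiling division. */
    public static long chunkCount(long fileTotalSize, long chunkSize) {
        return (fileTotalSize + chunkSize - 1) / chunkSize;
    }
}
```

For a 300-byte file with 128-byte chunks this produces "bytes=0-127", "bytes=128-255", "bytes=256-299".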