Kafka reports the following error:
java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.RecordTooLargeException: The message is 12792083 bytes when serialized which is larger than the maximum request size you have configured with the max.request.size configuration.
The cause is that the message being sent is larger than the default limit. The relevant source code is as follows:
ProducerConfig.java
.define(MAX_REQUEST_SIZE_CONFIG,
        Type.INT,
        1 * 1024 * 1024,
        atLeast(0),
        Importance.MEDIUM,
        MAX_REQUEST_SIZE_DOC)
As you can see, the default is 1 MB. To fix this, simply add the max.request.size setting when configuring the Kafka producer, for example:
properties.put("bootstrap.servers", "172.16.40.4:9092");
properties.put("acks", "1");
properties.put("retries", 0);
properties.put("batch.size", 16384);
properties.put("linger.ms", 1);
properties.put("max.request.size", 12695150);
properties.put("buffer.memory", 33554432);
properties.put("key.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
properties.put("value.serializer", "org.apache.kafka.common.serialization.ByteArraySerializer");
Note, however, that the value configured here must not exceed the maximum message size configured on the server side; otherwise you will get the following error:
org.apache.kafka.common.errors.RecordTooLargeException: The request included a message larger than the max message size the server will accept.
To change the server-side configuration, two files need to be modified. First, add the following to server.properties:
message.max.bytes=12695150
Then add the following to producer.properties:
max.request.size=12695150
Likewise, the consumer side must set the max.partition.fetch.bytes property large enough to receive such large messages.
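A hedged sketch of the matching consumer configuration follows; the group.id, topic name, and the reuse of 12695150 as the fetch limit are assumptions for illustration, not values from the original environment:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LargeMessageConsumer {
    public static void main(String[] args) {
        Properties properties = new Properties();
        properties.put("bootstrap.servers", "172.16.40.4:9092");
        properties.put("group.id", "big-message-group");  // hypothetical group id
        properties.put("key.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        properties.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        // Allow a single partition fetch to be large enough to hold the biggest message.
        properties.put("max.partition.fetch.bytes", 12695150);

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(properties)) {
            consumer.subscribe(Collections.singletonList("big-messages"));
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<byte[], byte[]> record : records) {
                    System.out.printf("received %d bytes at offset %d%n",
                            record.value().length, record.offset());
                }
            }
        }
    }
}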