Kafka deserialization exception: org.apache.kafka.common.errors.SerializationException

When integrating Kafka with Spring Boot, messages are produced successfully, but consuming them fails with org.apache.kafka.common.errors.SerializationException. The root cause is that the custom message class is not in the deserializer's trusted package list:


org.apache.kafka.common.errors.SerializationException: Error deserializing key/value for partition cxmBiStallCreditTopic-0 at offset 2. If needed, please seek past the record to continue consumption.
Caused by: java.lang.IllegalArgumentException: The class 'xx.xx' is not in the trusted packages: [java.util, java.lang]. If you believe this class is safe to deserialize, please provide its name. If the serialization is only done by a trusted source, you can also enable trust all (*).
	at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.getClassIdType(DefaultJackson2JavaTypeMapper.java:125) ~[spring-kafka-2.2.4.RELEASE.jar:2.2.4.RELEASE]
	at org.springframework.kafka.support.converter.DefaultJackson2JavaTypeMapper.toJavaType(DefaultJackson2JavaTypeMapper.java:99) ~[spring-kafka-2.2.4.RELEASE.jar:2.2.4.RELEASE]
	at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:342) ~[spring-kafka-2.2.4.RELEASE.jar:2.2.4.RELEASE]
	at org.apache.kafka.clients.consumer.internals.Fetcher.parseRecord(Fetcher.java:1041) ~[kafka-clients-2.0.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher.access$3300(Fetcher.java:110) ~[kafka-clients-2.0.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.fetchRecords(Fetcher.java:1223) ~[kafka-clients-2.0.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher$PartitionRecords.access$1400(Fetcher.java:1072) ~[kafka-clients-2.0.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher.fetchRecords(Fetcher.java:562) ~[kafka-clients-2.0.1.jar:na]
	at org.apache.kafka.clients.consumer.internals.Fetcher.fetchedRecords(Fetcher.java:523) ~[kafka-clients-2.0.1.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.pollForFetches(KafkaConsumer.java:1230) ~[kafka-clients-2.0.1.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1187) ~[kafka-clients-2.0.1.jar:na]
	at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1154) ~[kafka-clients-2.0.1.jar:na]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:742) ~[spring-kafka-2.2.4.RELEASE.jar:2.2.4.RELEASE]
	at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:699) ~[spring-kafka-2.2.4.RELEASE.jar:2.2.4.RELEASE]
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511) [na:1.8.0_181]
	at java.util.concurrent.FutureTask.run(FutureTask.java:266) [na:1.8.0_181]
	at java.lang.Thread.run(Thread.java:748) [na:1.8.0_181]

Source code analysis:

The deserialize method of JsonDeserializer:

@Override
public T deserialize(String topic, Headers headers, byte[] data) {
	if (data == null) {
		return null;
	}
	ObjectReader deserReader = null;
	if (this.typeMapper.getTypePrecedence().equals(TypePrecedence.TYPE_ID)) {
		JavaType javaType = this.typeMapper.toJavaType(headers); // resolve the target JavaType from the type headers
		if (javaType != null) {
			deserReader = this.objectMapper.readerFor(javaType);
		}
	}
	if (this.removeTypeHeaders) {
		this.typeMapper.removeHeaders(headers);
	}
	if (deserReader == null) {
		deserReader = this.reader;
	}
	Assert.state(deserReader != null, "No type information in headers and no default type provided");
	try {
		return deserReader.readValue(data);
	}
	catch (IOException e) {
		throw new SerializationException("Can't deserialize data [" + Arrays.toString(data) +
				"] from topic [" + topic + "]", e);
	}
}

The key line is

JavaType javaType = this.typeMapper.toJavaType(headers); // resolve the target JavaType from the type headers

which calls DefaultJackson2JavaTypeMapper's toJavaType method:

@Override
public JavaType toJavaType(Headers headers) {
	String typeIdHeader = retrieveHeaderAsString(headers, getClassIdFieldName());

	if (typeIdHeader != null) {

		JavaType classType = getClassIdType(typeIdHeader);
		if (!classType.isContainerType() || classType.isArrayType()) {
			return classType;
		}

		JavaType contentClassType = getClassIdType(retrieveHeader(headers, getContentClassIdFieldName()));
		if (classType.getKeyType() == null) {
			return TypeFactory.defaultInstance()
					.constructCollectionLikeType(classType.getRawClass(), contentClassType);
		}

		JavaType keyClassType = getClassIdType(retrieveHeader(headers, getKeyClassIdFieldName()));
		return TypeFactory.defaultInstance()
				.constructMapLikeType(classType.getRawClass(), keyClassType, contentClassType);
	}

	return null;
}

The crucial call is JavaType classType = getClassIdType(typeIdHeader);

private JavaType getClassIdType(String classId) {
	if (getIdClassMapping().containsKey(classId)) {
		return TypeFactory.defaultInstance().constructType(getIdClassMapping().get(classId));
	}
	else {
		try {
			if (!isTrustedPackage(classId)) { // check whether the package is trusted
				throw new IllegalArgumentException("The class '" + classId
						+ "' is not in the trusted packages: "
						+ this.trustedPackages + ". "
						+ "If you believe this class is safe to deserialize, please provide its name. "
						+ "If the serialization is only done by a trusted source, you can also enable "
						+ "trust all (*).");
			}
			else {
				return TypeFactory.defaultInstance()
						.constructType(ClassUtils.forName(classId, getClassLoader()));
			}
		}
		catch (ClassNotFoundException e) {
			throw new MessageConversionException("failed to resolve class name. Class not found ["
					+ classId + "]", e);
		}
		catch (LinkageError e) {
			throw new MessageConversionException("failed to resolve class name. Linkage error ["
					+ classId + "]", e);
		}
	}
}

isTrustedPackage is where the package name is extracted and checked:

private boolean isTrustedPackage(String requestedType) {
	if (!this.trustedPackages.isEmpty()) {
		String packageName = ClassUtils.getPackageName(requestedType).replaceFirst("\\[L", "");
		for (String trustedPackage : this.trustedPackages) {
			if (packageName.equals(trustedPackage)) {
				return true;
			}
		}
		return false;
	}
	return true;
}
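To make the failure concrete, here is a minimal standalone re-implementation of the same check (the real one lives in DefaultJackson2JavaTypeMapper and uses Spring's ClassUtils); "com.example.dto.Order" is a hypothetical message class used only for illustration:

```java
import java.util.Arrays;
import java.util.List;

public class TrustedPackageCheck {

    // Mirrors the default whitelist: only java.util and java.lang are trusted
    static final List<String> TRUSTED = Arrays.asList("java.util", "java.lang");

    // Simplified version of isTrustedPackage for illustration
    static boolean isTrusted(String className) {
        if (TRUSTED.isEmpty()) {
            return true; // an empty list means "trust everything" (the '*' case)
        }
        int lastDot = className.lastIndexOf('.');
        String packageName = (lastDot < 0 ? "" : className.substring(0, lastDot))
                .replaceFirst("\\[L", ""); // strip the array-type prefix, as the original does
        return TRUSTED.contains(packageName);
    }

    public static void main(String[] args) {
        System.out.println(isTrusted("java.lang.String"));      // trusted by default
        System.out.println(isTrusted("com.example.dto.Order")); // false -> triggers the exception
    }
}
```

Any class outside the two default packages fails this check, which is exactly the path the stack trace above goes down.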

The root cause of the error is that this method returns false.

By default, trustedPackages contains only java.util and java.lang.

We therefore need to add our own package to trustedPackages. DefaultJackson2JavaTypeMapper provides the addTrustedPackages method for this:

@Override
public void addTrustedPackages(String... trustedPackages) {
	if (trustedPackages != null) {
		for (String whiteListClass : trustedPackages) {
			if ("*".equals(whiteListClass)) {
				this.trustedPackages.clear();
				break;
			}
			else {
				this.trustedPackages.add(whiteListClass);
			}
		}
	}
}

This method is invoked from within JsonDeserializer,

and the configuration property key it reads is

public static final String TRUSTED_PACKAGES = "spring.json.trusted.packages";
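If you build the consumer configuration programmatically rather than through application.yml, you can put this key straight into the properties map. A minimal sketch, using the literal key string (it equals JsonDeserializer.TRUSTED_PACKAGES) and a hypothetical package name "com.example.dto" standing in for your own:

```java
import java.util.HashMap;
import java.util.Map;

public class TrustedPackagesConfig {

    // Literal value of JsonDeserializer.TRUSTED_PACKAGES
    static final String TRUSTED_PACKAGES = "spring.json.trusted.packages";

    // Build the consumer config map; other entries (bootstrap servers, group id,
    // deserializer classes) are omitted here for brevity
    static Map<String, Object> consumerProps(String ownPackage) {
        Map<String, Object> props = new HashMap<>();
        props.put(TRUSTED_PACKAGES, ownPackage);
        return props;
    }

    public static void main(String[] args) {
        // "com.example.dto" is a placeholder for your message package
        System.out.println(consumerProps("com.example.dto").get(TRUSTED_PACKAGES));
    }
}
```

A map built like this would typically be passed to a DefaultKafkaConsumerFactory when wiring the listener container yourself.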

 

Solution: add your own package to the trusted package list via the consumer properties:

spring:
  kafka:
    consumer:
      properties:
        spring.json.trusted.packages: xxx.xxxx

(The flattened spring.json.trusted.packages key binds directly into the consumer properties map; the fully nested YAML form also works under Spring Boot's relaxed binding, but the flat key is less error-prone.)

 
