Encoding/formatting issue with the Python Kafka library

After some digging, I found that Kafka prepends an extra 5-byte header to the messages it delivers to the consumer (one zero "magic" byte, followed by a 4-byte schema ID referencing the Schema Registry). I managed to get the consumer working by simply stripping those bytes.

Should I prepend a similar header when writing the producer?
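(For reference, a minimal sketch of how that header could be built on the producer side, assuming the schema is already registered and its ID is known; schema_id and record are hypothetical placeholders, not part of the original code:)

import io
import struct

import avro.io
import avro.schema

def encode_with_header(schema, schema_id, record):
    # Confluent wire format: one 0x00 magic byte, then the 4-byte
    # big-endian schema ID, then the Avro-encoded record body
    buf = io.BytesIO()
    buf.write(struct.pack(">bI", 0, schema_id))
    encoder = avro.io.BinaryEncoder(buf)
    avro.io.DatumWriter(schema).write(record, encoder)
    return buf.getvalue()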

The exception that gets thrown is below:

[2016-09-14 13:32:48,684] ERROR Task hdfs-sink-0 threw an uncaught and unrecoverable exception (org.apache.kafka.connect.runtime.WorkerTask:142)
org.apache.kafka.connect.errors.DataException: Failed to deserialize data to Avro:
    at io.confluent.connect.avro.AvroConverter.toConnectData(AvroConverter.java:109)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.convertMessages(WorkerSinkTask.java:357)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.poll(WorkerSinkTask.java:226)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.iteration(WorkerSinkTask.java:170)
    at org.apache.kafka.connect.runtime.WorkerSinkTask.execute(WorkerSinkTask.java:142)
    at org.apache.kafka.connect.runtime.WorkerTask.doRun(WorkerTask.java:140)
    at org.apache.kafka.connect.runtime.WorkerTask.run(WorkerTask.java:175)
    at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.errors.SerializationException: Error deserializing Avro message for id -1
Caused by: org.apache.kafka.common.errors.SerializationException: Unknown magic byte!

I am using the latest stable versions of both Kafka and kafka-python.

Edit

Consumer

from kafka import KafkaConsumer
import avro.io
import avro.schema
import io
import requests
import struct

# To consume messages
consumer = KafkaConsumer('hadoop_00',
                         group_id='my_group',
                         bootstrap_servers=['hadoop-master:9092'])

schema_path = "resources/f1.avsc"

for msg in consumer:
    value = bytearray(msg.value)
    # Bytes 1-4 of the header hold the schema ID, big-endian
    schema_id = struct.unpack(">L", value[1:5])[0]
    response = requests.get("http://hadoop-master:8081/schemas/ids/" + str(schema_id))
    schema = response.json()["schema"]
    schema = avro.schema.parse(schema)
    # Skip the 5-byte header (magic byte + schema ID) before decoding
    bytes_reader = io.BytesIO(value[5:])
    # bytes_reader = io.BytesIO(msg.value)
    decoder = avro.io.BinaryDecoder(bytes_reader)
    reader = avro.io.DatumReader(schema)
    temp = reader.read(decoder)
    print(temp)
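One side note on the loop above: it issues an HTTP request to the registry for every message. A small cache keyed by schema ID avoids that (a sketch, reusing the hadoop-master:8081 registry URL from the code above):

import requests
import avro.schema

schemas = {}  # schema_id -> parsed Avro schema

def get_schema(schema_id):
    # Fetch each schema ID from the registry only once
    if schema_id not in schemas:
        r = requests.get("http://hadoop-master:8081/schemas/ids/" + str(schema_id))
        schemas[schema_id] = avro.schema.parse(r.json()["schema"])
    return schemas[schema_id]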

Producer

from kafka import KafkaProducer
import avro.schema
import io
from avro.io import DatumWriter

producer = KafkaProducer(bootstrap_servers="hadoop-master")

# Kafka topic
topic = "hadoop_00"

# Path to user.avsc avro schema
schema_path = "resources/f1.avsc"
schema = avro.schema.parse(open(schema_path).read())

range = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
for i in range:
    # Note: the schema is parsed but never used; the value sent is a
    # plain byte string with no Avro encoding and no 5-byte header
    producer.send(topic, b'{"f1":"value_' + str(i))
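For comparison, a sketch of what the producer could look like if each record were Avro-encoded with the 5-byte header. The POST to /subjects/hadoop_00-value/versions is an assumption about how the schema ID would be obtained (the Schema Registry returns the ID when a schema is registered); the hadoop-master:8081 URL follows the consumer code above:

import io
import json
import struct

import avro.io
import avro.schema
import requests
from kafka import KafkaProducer

schema_str = open("resources/f1.avsc").read()
schema = avro.schema.parse(schema_str)

# Register the schema (or look up its existing ID) under the topic's value subject
resp = requests.post(
    "http://hadoop-master:8081/subjects/hadoop_00-value/versions",
    data=json.dumps({"schema": schema_str}),
    headers={"Content-Type": "application/vnd.schemaregistry.v1+json"})
schema_id = resp.json()["id"]

producer = KafkaProducer(bootstrap_servers="hadoop-master")
for i in [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]:
    buf = io.BytesIO()
    buf.write(struct.pack(">bI", 0, schema_id))  # 0x00 magic byte + schema ID
    encoder = avro.io.BinaryEncoder(buf)
    avro.io.DatumWriter(schema).write({"f1": "value_" + str(i)}, encoder)
    producer.send("hadoop_00", buf.getvalue())
producer.flush()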

+0

Your producer and consumer code, please. That will help put everything together. –

+0

@thiruvenkadam There you go –
