1. Scenarios for Integrating Kafka with Odoo

In some business scenarios we want to push data changes in Odoo (such as new orders or new user registrations) to Kafka in real time, so that other business systems can process them with their own custom consumers.

2. Implementing the Kafka Consumer

The following code shows how to implement a Kafka consumer that listens to a specific Kafka topic (such as odoo_topic), processes each message, and stores the result in Odoo. The examples in this article use the kafka-python client library:

import threading
import time
from kafka import KafkaConsumer, KafkaProducer
import json
from odoo import api, registry, SUPERUSER_ID

class KafkaConsumerWrapper(threading.Thread):
    def __init__(self, db_name, env):
        threading.Thread.__init__(self)
        self.db_name = db_name
        self.env = env
        self.consumer = KafkaConsumer(
            'odoo_topic',
            bootstrap_servers=['localhost:9092'],
            enable_auto_commit=False,
            group_id='my_group',
            auto_offset_reset='earliest',
            value_deserializer=lambda m: json.loads(m.decode('utf-8'))
        )
        self.producer = KafkaProducer(
            bootstrap_servers=['localhost:9092'],
            value_serializer=lambda m: json.dumps(m).encode('utf-8')
        )
        self.max_retries = 3
        self.dlq_topic = 'dead_letter_queue'  # topic for the dead-letter queue

    def run(self):
        with api.Environment.manage():
            registry_obj = registry(self.db_name)
            with registry_obj.cursor() as cr:
                env = api.Environment(cr, SUPERUSER_ID, {})
                for message in self.consumer:
                    data = message.value
                    self.process_message(env, data)
                    # commit the offset manually
                    self.consumer.commit()

    def process_message(self, env, data):
        try:
            success = False
            retry_count = 0
            while not success and retry_count < self.max_retries:
                try:
                    if data.get('name') == 'error':
                        raise Exception("Simulated processing error")
                    env['message.model'].create({'name': data.get('name'), 'status': 'Received'})
                    env.cr.commit()
                    self.consumer.commit()
                    success = True  # exit the retry loop after successful processing
                except Exception as e:
                    retry_count += 1
                    print(f"Error processing message: {e}")
                    partitions = self.consumer.assignment()
                    offsets = {tp: self.consumer.position(tp) for tp in partitions}
                    for tp, offset in offsets.items():
                        self.consumer.seek(tp, offset)
                    time.sleep(1)

            if not success:
                print(f"Failed to process message after {self.max_retries} retries, sending to DLQ")
                self.send_to_dlq(data)
        except KeyboardInterrupt:
            print("Consumer interrupted by user")

    def send_to_dlq(self, data):
        try:
            self.producer.send(self.dlq_topic, data)
            self.producer.flush()
        except Exception as e:
            print(f"Failed to send message to DLQ: {e}")

def start_kafka_consumer_service(env):
    db_name = env.cr.dbname
    consumer_service = KafkaConsumerWrapper(db_name, env)
    consumer_service.start()

This consumer class subclasses threading.Thread to run a multi-threaded Kafka consumer. It listens to the odoo_topic topic and stores each message in the Odoo database. If an error occurs while processing a message, the consumer retries, and after repeated failures it sends the message to a dead-letter queue (DLQ).
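
The create() call above assumes a model named message.model with name and status fields; that model is not part of standard Odoo, so the integration module has to provide it. A minimal sketch of such a model could look like this:

from odoo import fields, models

class KafkaMessage(models.Model):
    # Minimal model used to store messages received from Kafka
    _name = 'message.model'
    _description = 'Kafka Message'

    name = fields.Char(string='Name', required=True)
    status = fields.Char(string='Status', default='Received')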

3. How to Start the Consumer

Use a hook function to start the Kafka consumer while Odoo is starting up:

import odoo
from odoo.tools import config

def post_load():
    # Start the Kafka consumer service once Odoo has finished loading
    with odoo.api.Environment.manage():
        registry = odoo.registry(config['db_name'])
        with registry.cursor() as cr:
            uid = odoo.SUPERUSER_ID
            env = odoo.api.Environment(cr, uid, {})
            start_kafka_consumer_service(env)
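
For Odoo to call post_load, the hook must be declared in the module's manifest. Assuming the function is defined in (or imported into) the module's __init__.py, the manifest entry could look like the sketch below; the module name and the other keys are placeholders:

# __manifest__.py
{
    'name': 'Kafka Integration',
    'depends': ['base'],
    # Name of a function exposed by the module's __init__.py;
    # Odoo calls it when the module's Python package is loaded at server startup.
    'post_load': 'post_load',
}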

4. Implementing the Kafka Producer

The producer is responsible for sending data from Odoo to Kafka. The following code shows a simple Kafka producer implementation:

from kafka import KafkaProducer
import json

class KafkaProducerWrapper:
    def __init__(self, server='localhost:9092'):
        self.producer = KafkaProducer(
            bootstrap_servers=server,
            value_serializer=lambda v: json.dumps(v).encode('utf-8')
        )

    def send_message(self, topic, message):
        self.producer.send(topic, message)
        self.producer.flush()

# Intended as a method on an Odoo model: `self` is a recordset of the records to publish
def send_kafka_message(self):
    producer = KafkaProducerWrapper()
    for record in self:
        producer.send_message('odoo_topic', {'name': record.name})

This producer class provides a simple send_message method for publishing data to a given Kafka topic. In a real application we can call it from Odoo's business logic to push important business events to Kafka for other systems to consume.
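
As an illustration of how the producer can be wired into business logic, the sketch below overrides create() on sale.order so that every new order is published to odoo_topic. The choice of model, the import path of KafkaProducerWrapper, and the use of @api.model_create_multi (Odoo 13+) are assumptions for this example rather than code from the article:

from odoo import api, models

from .kafka_producer import KafkaProducerWrapper  # assumed import path for the class above

class SaleOrder(models.Model):
    _inherit = 'sale.order'

    @api.model_create_multi
    def create(self, vals_list):
        # Create the orders first, then publish one event per new record
        orders = super().create(vals_list)
        producer = KafkaProducerWrapper()
        for order in orders:
            producer.send_message('odoo_topic', {'name': order.name})
        return orders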