Tutorial: Use Azure Functions to process real-time data from Azure Event Hubs and persist it to Azure Cosmos DB

One of the previous blogs covered some of the concepts behind how Azure Event Hubs supports multiple protocols for data exchange. In this blog, we will see it in action using an example. With the help of a sample app, you will see how to combine a real-time data ingestion component with a Serverless processing layer.

The sample application has two main components: a producer that sends simulated orders to Azure Event Hubs, and a Serverless layer (an Azure Function) that processes those orders and persists them to Azure Cosmos DB.

To follow along and deploy this solution to Azure, you are going to need a Microsoft Azure account. You can grab one for free if you don't already have one!

Application components

Let's go over the individual components of the application.

As always, the code is available on GitHub

Producer component

This is pretty straightforward - it is a Go app which uses the Sarama Kafka client to send (simulated) "orders" to Azure Event Hubs (a Kafka topic). It is available as a Docker image for ease of use (details in the next section).

Here is the relevant code snippet:

order := Order{OrderID: "order-1234", CustomerID: "customer-1234", Product: "product-1234"}

// marshal the order to JSON bytes
b, err := json.Marshal(order)
if err != nil {
    log.Fatal(err)
}

// send to the Event Hubs topic, keyed by the order ID
msg := &sarama.ProducerMessage{Topic: eventHubsTopic, Key: sarama.StringEncoder(order.OrderID), Value: sarama.ByteEncoder(b)}
if _, _, err = producer.SendMessage(msg); err != nil {
    log.Fatal(err)
}

A lot of the details have been omitted (from the above snippet) - you can go through the full code here. To summarize, an Order is created, converted (marshaled) into JSON (bytes), and sent to the Event Hubs Kafka endpoint.
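For context, connecting Sarama to the Event Hubs Kafka endpoint generally means SASL PLAIN over TLS, with the literal user name $ConnectionString and the Event Hubs connection string as the password. The sketch below shows one way to build such a producer - it is an illustration based on that documented pattern, not necessarily the exact code in the repo:

import (
    "crypto/tls"
    "os"

    "github.com/Shopify/sarama"
)

// newEventHubsProducer builds a Sarama sync producer for the Event Hubs Kafka endpoint.
func newEventHubsProducer() (sarama.SyncProducer, error) {
    config := sarama.NewConfig()
    config.Producer.Return.Successes = true // required by SyncProducer
    config.Version = sarama.V1_0_0_0        // Event Hubs needs Kafka protocol 1.0+

    // Event Hubs authenticates Kafka clients with SASL PLAIN over TLS:
    // the user is the literal "$ConnectionString", the password is the
    // Event Hubs connection string.
    config.Net.SASL.Enable = true
    config.Net.SASL.Mechanism = sarama.SASLTypePlaintext
    config.Net.SASL.User = "$ConnectionString"
    config.Net.SASL.Password = os.Getenv("EVENTHUBS_CONNECTION_STRING")
    config.Net.TLS.Enable = true
    config.Net.TLS.Config = &tls.Config{MinVersion: tls.VersionTLS12}

    // EVENTHUBS_BROKER is <namespace>.servicebus.windows.net:9093
    return sarama.NewSyncProducer([]string{os.Getenv("EVENTHUBS_BROKER")}, config)
}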

Serverless component

The Serverless part is a Java Azure Function. It leverages the following capabilities:

  • the Azure Event Hubs Trigger
  • the Azure Cosmos DB Output Binding

The Trigger allows the Azure Functions logic to get invoked whenever an order event is sent to Azure Event Hubs. The Output Binding takes care of all the heavy lifting, such as establishing the database connection, scaling, concurrency, etc., so all that's left for us to build is the business logic. In this case it has been kept pretty simple: on receiving the order data from Azure Event Hubs, the function enriches it with additional info (customer and product name in this case) and persists it in an Azure Cosmos DB container.

You can check the OrderProcessor code on GitHub, but here is the gist:

@FunctionName("storeOrders")
public void storeOrders(

  @EventHubTrigger(name = "orders", eventHubName = "", connection = 
  "EventHubConnectionString", cardinality = Cardinality.ONE) 
  OrderEvent orderEvent,

  @CosmosDBOutput(name = "databaseOutput", databaseName = "AppStore", 
  collectionName = "orders", connectionStringSetting = 
  "CosmosDBConnectionString") 
  OutputBinding<Order> output,

  final ExecutionContext context) {
....

Order order = new Order(orderEvent.getOrderId(), Data.CUSTOMER_DATA.get(orderEvent.getCustomerId()), orderEvent.getCustomerId(), Data.PRODUCT_DATA.get(orderEvent.getProduct()));
output.setValue(order);

....
}

The storeOrders method is annotated with @FunctionName and it receives data from Event Hubs in the form of an OrderEvent object. Thanks to the @EventHubTrigger annotation, the platform takes care of converting the Event Hubs payload to a Java POJO (of type OrderEvent) and routing it correctly. The connection = "EventHubConnectionString" part specifies that the Event Hubs connection string is available in the function configuration/settings under the name EventHubConnectionString.

The @CosmosDBOutput annotation is used to persist data in Azure Cosmos DB. It contains the Cosmos DB database and container name, along with the connection string which will be picked up from the CosmosDBConnectionString configuration parameter in the function. The POJO (Order in this case) is persisted to Cosmos DB with a single setValue method call on the OutputBinding object - the platform makes it really easy, but there is a lot going on behind the scenes!
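For reference, here is a minimal sketch of what the two POJOs could look like, inferred from the constructor call in the snippet above (the field names and exact shape of the classes in the repo may differ):

// assumed shape of the incoming event (deserialized from the producer's JSON)
public class OrderEvent {
    private String orderId;
    private String customerId;
    private String product;

    public OrderEvent() {} // no-arg constructor for JSON deserialization

    public String getOrderId() { return orderId; }
    public String getCustomerId() { return customerId; }
    public String getProduct() { return product; }
}

// assumed shape of the enriched entity persisted to Cosmos DB
public class Order {
    private final String orderId;
    private final String customer;   // customer name, looked up via Data.CUSTOMER_DATA
    private final String customerId;
    private final String product;    // product name, looked up via Data.PRODUCT_DATA

    public Order(String orderId, String customer, String customerId, String product) {
        this.orderId = orderId;
        this.customer = customer;
        this.customerId = customerId;
        this.product = product;
    }

    // getters let the Cosmos DB output binding serialize the object
    public String getOrderId() { return orderId; }
    public String getCustomer() { return customer; }
    public String getCustomerId() { return customerId; }
    public String getProduct() { return product; }
}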

Let's switch gears and learn how to deploy the solution to Azure.

Pre-requisites

Notes

  • Ideally, all the components (Event Hubs, Cosmos DB, Storage, and the Azure Function) should be in the same region
  • It is recommended to place all these services in a new resource group, so that it is easy to locate and delete them

Deploy the Order Processor function

This example makes use of the Azure Functions Maven plugin for deployment. First, update the pom.xml to add the required configuration.

In the <appSettings> section, replace the values of the AzureWebJobsStorage, EventHubConnectionString, and CosmosDBConnectionString parameters.
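If you are not sure where that section lives, it is part of the azure-functions-maven-plugin configuration in pom.xml. It looks roughly like this (the exact schema may vary between plugin versions, so treat this as illustrative):

<plugin>
    <groupId>com.microsoft.azure</groupId>
    <artifactId>azure-functions-maven-plugin</artifactId>
    <configuration>
        ...
        <appSettings>
            <property>
                <name>AzureWebJobsStorage</name>
                <value><!-- storage account connection string --></value>
            </property>
            <property>
                <name>EventHubConnectionString</name>
                <value><!-- Event Hubs connection string --></value>
            </property>
            <property>
                <name>CosmosDBConnectionString</name>
                <value><!-- Cosmos DB connection string --></value>
            </property>
        </appSettings>
    </configuration>
</plugin>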

Use the Azure CLI to easily fetch the required details
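For example, something along these lines should work (the resource and account names below are placeholders - substitute your own):

# connection string for the Storage account (AzureWebJobsStorage)
az storage account show-connection-string --name <storage-account> --resource-group <resource-group> --query connectionString -o tsv

# connection string for the Event Hubs namespace (EventHubConnectionString)
az eventhubs namespace authorization-rule keys list --resource-group <resource-group> --namespace-name <eventhubs-namespace> --name RootManageSharedAccessKey --query primaryConnectionString -o tsv

# connection string for the Cosmos DB account (CosmosDBConnectionString)
az cosmosdb keys list --type connection-strings --name <cosmosdb-account> --resource-group <resource-group> --query "connectionStrings[0].connectionString" -o tsv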

For the configuration section, update the following:

  • resourceGroup: the resource group to which you want to deploy the function
  • region: the Azure region to which you want to deploy the function (get the list of locations)

To deploy, you need two commands:

  • mvn clean package - prepares the deployment artifact
  • mvn azure-functions:deploy - deploys it to Azure

You can confirm the deployment using the Azure CLI (az functionapp list --query "[?name=='orders-processor']") or the portal.

Run Event Hubs producer

Set the environment variables:

export EVENTHUBS_BROKER=<namespace>.servicebus.windows.net:9093
export EVENTHUBS_TOPIC=<event-hub-name>
export EVENTHUBS_CONNECTION_STRING="Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=RootManageSharedAccessKey;SharedAccessKey=<primary_key>"

Run the Docker image:

docker run -e EVENTHUBS_BROKER=$EVENTHUBS_BROKER -e EVENTHUBS_TOPIC=$EVENTHUBS_TOPIC -e EVENTHUBS_CONNECTION_STRING=$EVENTHUBS_CONNECTION_STRING abhirockzz/eventhubs-kafka-producer
Press Ctrl + C to stop producing events.

Confirm the results in Azure Cosmos DB

You can use the Azure Cosmos DB data explorer (web interface) to check the items in the container. You should see results similar to this:

(screenshot: order items in the Azure Cosmos DB data explorer)
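Each item is an enriched order. Based on the Order fields above, a stored document should look roughly like this (Cosmos DB system properties such as id, _rid, and _etag are omitted, and the actual field names depend on the Order class):

{
  "orderId": "order-1234",
  "customer": "<customer name>",
  "customerId": "customer-1234",
  "product": "<product name>"
}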

Clean up

Assuming you placed all the services in the same resource group, you can delete them using a single command:

export RESOURCE_GROUP_NAME=<enter the name>
az group delete --name $RESOURCE_GROUP_NAME --no-wait

Thanks for reading 🙂 Happy to get your feedback via Twitter or just drop a comment 🙏🏻 Stay tuned for more!

Source: https://dev.to//azure/tutorial-use-azure-functions-to-process-real-time-data-from-azure-event-hubs-and-persist-to-azure-cosmos-db-2co8
