Tutorial: Getting started with monitoring and logging using Logz.io for Java apps running on Azure

11/05/2019

In this article

This tutorial shows you how to configure a classic Java application to send logs to the Logz.io service for ingestion and analysis. Logz.io provides a full monitoring solution based on Elasticsearch/Logstash/Kibana (ELK) and Grafana.

The tutorial assumes you're using Log4J or Logback. These libraries are the two most widely used for logging in Java, so the tutorial should work for most applications running on Azure. If you're already using the Elastic Stack to monitor your Java application, this tutorial shows you how to reconfigure it to target the Logz.io endpoint.

In this tutorial, you'll learn how to:

Send logs from an existing Java application to Logz.io.

Send diagnostic logs and metrics from Azure services to Logz.io.

Prerequisites

Java Developer Kit, version 8 or greater

A Logz.io account from the Azure Marketplace

An existing Java application that uses Log4J or Logback

Send Java application logs to Logz.io

First, you'll learn how to configure your Java application with a token that gives it access to your Logz.io account.

Get your Logz.io access token

To get your token, log in to your Logz.io account, select the cog icon in the right-hand corner, then select Settings > General. Copy the access token displayed in your account settings so you can use it later.

Install and configure the Logz.io library for Log4J or Logback

The Logz.io Java library is available on Maven Central, so you can add it as a dependency to your app configuration. Check the version number on Maven Central and use the latest version in the following configuration settings.

If you're using Maven, add the following dependency to your pom.xml file:

Log4J:

<dependency>
    <groupId>io.logz.log4j2</groupId>
    <artifactId>logzio-log4j2-appender</artifactId>
    <version>1.0.11</version>
</dependency>

Logback:

<dependency>
    <groupId>io.logz.logback</groupId>
    <artifactId>logzio-logback-appender</artifactId>
    <version>1.0.22</version>
</dependency>

If you're using Gradle, add the following dependency to your build script:

Log4J:

implementation 'io.logz.log4j2:logzio-log4j2-appender:1.0.11'

Logback:

implementation 'io.logz.logback:logzio-logback-appender:1.0.22'
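
The appender is wired in entirely through your logging configuration (shown in the next step); your application code keeps writing logs through its usual API. As a minimal, illustrative sketch (the class name and message below are hypothetical, not taken from the Logz.io samples), a class that logs through the SLF4J facade works unchanged with either Logback or Log4J 2:

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class OrderService {

    // Standard SLF4J logger; Logback backs it natively, and Log4J 2 can back it
    // through an SLF4J binding such as log4j-slf4j-impl.
    private static final Logger logger = LoggerFactory.getLogger(OrderService.class);

    public void placeOrder(String orderId) {
        // Once the Logz.io appender is configured, this entry is shipped to your account.
        logger.info("Received order {}", orderId);
    }
}

Nothing in this code is Logz.io-specific; the routing to Logz.io happens entirely in the Log4J or Logback configuration file.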

Next, update your Log4J or Logback configuration file:

Log4J:

<Configuration>
    <Appenders>
        <LogzioAppender name="Logzio">
            <logzioToken><your-logz-io-token></logzioToken>
            <logzioType>java-application</logzioType>
            <logzioUrl>https://<your-logz-io-listener-host>:8071</logzioUrl>
        </LogzioAppender>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Logzio"/>
        </Root>
    </Loggers>
</Configuration>

Logback:

<configuration>
    <appender name="LogzioLogbackAppender" class="io.logz.logback.LogzioLogbackAppender">
        <token><your-logz-io-token></token>
        <logzioUrl>https://<your-logz-io-listener-host>:8071</logzioUrl>
        <logzioType>java-application</logzioType>
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>INFO</level>
        </filter>
    </appender>

    <root level="debug">
        <appender-ref ref="LogzioLogbackAppender"/>
    </root>
</configuration>

Replace the <your-logz-io-token> placeholder with your access token, and the <your-logz-io-listener-host> placeholder with your region's listener host (for example, listener.logz.io). For more information on finding your account's region, see Account region.

The logzioType element refers to a logical field in Elasticsearch that is used to separate different documents from one another. It's essential to configure this parameter properly to get the most out of Logz.io.

A Logz.io "Type" is your log format (for example: Apache, NGINX, MySQL), not your source (for example: server1, server2, server3). For this tutorial, we're calling the type java-application because we're configuring Java applications and expect them all to have the same format.

For advanced usage, you could group your Java applications into different types, each with its own specific log format (configurable with Log4J and Logback). For example, you could have a "spring-boot-monolith" type and a "spring-boot-microservice" type.
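
If you'd rather assign the type in code than in a static XML file, for example to derive it per service from an environment variable, Logback also lets you register appenders programmatically. The following is a hedged sketch rather than the documented Logz.io setup: it assumes logzio-logback-appender exposes bean-style setters matching its XML elements (setToken, setLogzioUrl, setLogzioType), so check the library's documentation before relying on these exact names.

import ch.qos.logback.classic.Level;
import ch.qos.logback.classic.Logger;
import ch.qos.logback.classic.LoggerContext;
import io.logz.logback.LogzioLogbackAppender;
import org.slf4j.LoggerFactory;

public class LogzioTypeSetup {

    public static void configure(String token, String listenerHost, String logzioType) {
        // Logback's LoggerContext is the concrete ILoggerFactory behind SLF4J.
        LoggerContext context = (LoggerContext) LoggerFactory.getILoggerFactory();

        LogzioLogbackAppender appender = new LogzioLogbackAppender();
        appender.setContext(context);
        // Assumed setter names, mirroring the <token>, <logzioUrl>, and <logzioType>
        // elements of the XML configuration.
        appender.setToken(token);
        appender.setLogzioUrl("https://" + listenerHost + ":8071");
        appender.setLogzioType(logzioType);
        appender.start();

        Logger root = context.getLogger(Logger.ROOT_LOGGER_NAME);
        root.setLevel(Level.INFO);
        root.addAppender(appender);
    }
}

For example, each service could call configure(token, listenerHost, System.getenv("LOGZIO_TYPE")) at startup so that every deployment ships its logs under its own type.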

Test your configuration and log analysis on Logz.io

After the Logz.io library is configured, your application should send logs directly to it. To test that everything works correctly, go to the Logz.io console, select the Live tail tab, then select run. You should see a message similar to the following, telling you the connection is working:

Requesting Live Tail access...

Access granted. Opening connection...

Connected. Tailing...

Next, start your application, or use it to produce some logs. The logs should appear directly on your screen. For example, here are the first startup messages of a Spring Boot application:

2019-09-19 12:54:40.685Z Starting JavaApp on javaapp-default-9-5cfcb8797f-dfp46 with PID 1 (/workspace/BOOT-INF/classes started by cnb in /workspace)

2019-09-19 12:54:40.686Z The following profiles are active: prod

2019-09-19 12:54:42.052Z Bootstrapping Spring Data repositories in DEFAULT mode.

2019-09-19 12:54:42.169Z Finished Spring Data repository scanning in 103ms. Found 6 repository interfaces.

2019-09-19 12:54:43.426Z Bean 'spring.task.execution-org.springframework.boot.autoconfigure.task.TaskExecutionProperties' of type [org.springframework.boot.autoconfigure.task.TaskExecutionProperties] is not eligible for getting processed by all BeanPostProcessors (for example: not eligible for auto-proxying)

Now that your logs are processed by Logz.io, you can take advantage of all the platform's services.

Send Azure services data to Logz.io

Next, you'll learn how to send logs and metrics from your Azure resources to Logz.io.

Deploy the template

The first step is to deploy the Logz.io - Azure integration template. The integration is based on a ready-made Azure deployment template that sets up all the necessary building blocks of the pipeline. The template creates an Event Hub namespace, an event hub, two storage blobs, and all the correct permissions and connections required. The resources set up by the automated deployment can collect data for a single Azure region and ship that data to Logz.io.

Find the Deploy to Azure button displayed in the first step of the repo's readme.

When you select Deploy to Azure, the Custom Deployment page in the Azure portal appears with a list of pre-filled fields.

You can leave most of the fields as-is, but be sure to enter the following settings:

Resource group: Select an existing group or create a new one.

Logzio Logs/Metrics Host: Enter the URL of the Logz.io listener. If you're not sure what this URL is, check your login URL. If it's app.logz.io, use listener.logz.io (the default setting). If it's app-eu.logz.io, use listener-eu.logz.io.

Logzio Logs/Metrics Token: Enter the token of the Logz.io account you want to ship Azure logs or metrics to. You can find this token on the account page in the Logz.io UI.

Agree to the terms at the bottom of the page, and select Purchase. Azure will then deploy the template, which may take a minute or two. You'll eventually see the "Deployment succeeded" message at the top of the portal.

You can visit the defined resource group to review the deployed resources.

To learn how to configure logzio-azure-serverless to back up data to Azure Blob Storage, see Ship Azure activity logs.

Stream Azure logs and metrics to Logz.io

Now that you've deployed the integration template, you'll need to configure Azure to stream diagnostic data to the event hub you just deployed. When data comes into the event hub, the function app forwards that data to Logz.io.

In the search bar, type "Diagnostic", then select Diagnostic settings.

Choose a resource from the list of resources, then select Add diagnostic setting to open the Diagnostics settings panel for that resource.


Give your diagnostic settings a Name.

Select Stream to an event hub, then select Configure to open the Select event hub panel.

Choose your event hub:

Select event hub namespace: Choose the namespace that starts with Logzio (LogzioNS6nvkqdcci10p, for example).

Select event hub name: For logs, choose insights-operational-logs; for metrics, choose insights-operational-metrics.

Select event hub policy name: Choose LogzioSharedAccessKey.

Select OK to return to the Diagnostics settings panel.

In the Log section, select the data you want to stream, then select Save.

The selected data will now stream to the event hub.

Visualize your data

Next, give your data some time to get from your system to Logz.io, and then open Kibana. You should see data (with the type eventhub) filling up your dashboards. For more information on how to create dashboards, see Creating the Perfect Kibana Dashboard.

From there, you can query for specific data in the Discover tab, or create Kibana objects to visualize your data in the Visualize tab.

Clean up resources

When you're finished with the Azure resources you created in this tutorial, you can delete them with the following command, replacing <resource-group-name> with the name of the resource group you used:

az group delete --name <resource-group-name>

Next steps

In this tutorial, you learned how to configure your Java application and Azure services to send logs and metrics to Logz.io.

Next, learn more about using Event Hub to monitor your application.
