Log Aggregation Using the ELK Stack

1. Introduction

With the use of microservices, it has become easy to build stable distributed applications and to get rid of many legacy problems.

But the use of microservices has also brought a few challenges, and distributed log management is one of them.

Since microservices are isolated and do not share databases or log files, searching, analyzing, and viewing log data in real time becomes challenging.

This is where the ELK stack comes to the rescue.

2. ELK

It is a collection of three open-source products:

  • Elasticsearch is a JSON-based NoSQL database
  • Logstash is a log pipeline tool that accepts input from various sources, performs different transformations, and exports the data to various targets (here, to Elasticsearch)
  • Kibana is a visualization layer that works on top of Elasticsearch

Please refer to the architecture given below:


[Figure: ELK Stack architecture]

Logstash fetches the logs from the microservices.

The fetched logs are transformed into JSON and fed to Elasticsearch.

Developers can view the logs stored in Elasticsearch using Kibana.

3. Installing ELK

ELK is Java based.

Before installing ELK, make sure that JAVA_HOME and PATH are set up, and that the installation is done with JDK 1.8.
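
As a quick check of this prerequisite (a minimal sketch, not from the original article), the snippet below prints the JDK version in use and the value of JAVA_HOME:

public class JdkCheck {
	public static void main(String[] args) {
		// Version of the JVM that is actually executing this class
		System.out.println("java.version = " + System.getProperty("java.version"));
		// Value of the JAVA_HOME environment variable, if it is set
		System.out.println("JAVA_HOME    = " + System.getenv("JAVA_HOME"));
	}
}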

3.1 Elasticsearch

  • The latest version of Elasticsearch can be downloaded from its download page and extracted into any folder
  • It can be executed from the command prompt using bin\elasticsearch.bat
  • By default, it starts at http://localhost:9200 (a quick sanity-check sketch follows this list)
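
As the sanity check mentioned above (a minimal sketch, assuming Elasticsearch is running locally on the default port 9200), the root endpoint can be queried from plain Java; it returns a small JSON document containing the cluster name and version:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class ElasticsearchCheck {
	public static void main(String[] args) throws Exception {
		// Query the Elasticsearch root endpoint on the default port
		HttpURLConnection con = (HttpURLConnection) new URL("http://localhost:9200/").openConnection();
		con.setRequestMethod("GET");
		System.out.println("HTTP status: " + con.getResponseCode());
		try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
			String line;
			while ((line = in.readLine()) != null) {
				System.out.println(line); // prints cluster name, version and tagline
			}
		}
	}
}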

3.2 Kibana

  • The latest version of Kibana can be downloaded from its download page and extracted into any folder
  • It can be executed from the command prompt using bin\kibana.bat
  • After a successful startup, Kibana runs on the default port 5601, and the Kibana UI is available at http://localhost:5601

3.3 Logstash

  • The latest version of Logstash can be downloaded from its download page and extracted into any folder
  • Create a file cst_logstash.conf as per the configuration instructions (see section 5)
  • Logstash can then be started from the command prompt using bin/logstash -f cst_logstash.conf

4. Creating a Sample Microservice Component

A microservice needs to be created so that Logstash can be pointed at its API log file.

The listings below show the code of the sample microservice.

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
	xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
	<modelVersion>4.0.0</modelVersion>
	<groupId>com.xyz.app</groupId>
	<artifactId>ArtDemo1001_Rest_Controller_Full_Deployment_Logging</artifactId>
	<version>0.0.1-SNAPSHOT</version>
	<!-- Add Spring repositories -->
	<!-- (you don't need this if you are using a .RELEASE version) -->
	<repositories>
		<repository>
			<id>spring-snapshots</id>
			<url>http://repo.spring.io/snapshot</url>
			<snapshots>
				<enabled>true</enabled>
			</snapshots>
		</repository>
		<repository>
			<id>spring-milestones</id>
			<url>http://repo.spring.io/milestone</url>
		</repository>
	</repositories>
	<pluginRepositories>
		<pluginRepository>
			<id>spring-snapshots</id>
			<url>http://repo.spring.io/snapshot</url>
		</pluginRepository>
		<pluginRepository>
			<id>spring-milestones</id>
			<url>http://repo.spring.io/milestone</url>
		</pluginRepository>
	</pluginRepositories>
	<parent>
		<groupId>org.springframework.boot</groupId>
		<artifactId>spring-boot-starter-parent</artifactId>
		<version>1.5.2.RELEASE</version>
	</parent>
	<properties>
		<project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
		<project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>
		<java.version>1.8</java.version>
		<spring-cloud.version>Dalston.SR3</spring-cloud.version>
	</properties>
	<!-- Add typical dependencies for a web application -->
	<dependencies>
		<dependency>
			<groupId>org.springframework.boot</groupId>
			<artifactId>spring-boot-starter-web</artifactId>
		</dependency>
	</dependencies>

	<!-- Package as an executable jar -->
	<build>
		<plugins>
			<plugin>
				<groupId>org.springframework.boot</groupId>
				<artifactId>spring-boot-maven-plugin</artifactId>
			</plugin>
		</plugins>
	</build>

	<dependencyManagement>
		<dependencies>
			<dependency>
				<groupId>org.springframework.cloud</groupId>
				<artifactId>spring-cloud-dependencies</artifactId>
				<version>${spring-cloud.version}</version>
				<type>pom</type>
				<scope>import</scope>
			</dependency>
		</dependencies>
	</dependencyManagement>

</project>

The pom.xml above configures the dependencies required for the Spring Boot based project.

EmployeeDAO.java

package com.xyz.app.dao;

import java.util.Collection;
import java.util.LinkedHashMap;
import java.util.Map;

import org.springframework.stereotype.Repository;

import com.xyz.app.model.Employee;

@Repository
public class EmployeeDAO {
	/**
	 * Map is used to Replace the Database 
	 * */
	static public Map<Integer,Employee> mapOfEmloyees = 
                 new LinkedHashMap<Integer,Employee>();
	static int count=10004;
	static
	{
		mapOfEmloyees.put(10001, new Employee("Jack",10001,12345.6,1001));
		mapOfEmloyees.put(10002, new Employee("Justin",10002,12355.6,1002));
		mapOfEmloyees.put(10003, new Employee("Eric",10003,12445.6,1003));
	}
	
	/**
	 * Returns all the Existing Employees
	 * */
	public Collection getAllEmployee(){
		return mapOfEmloyees.values();			
	}
	

	/**Get Employee details using EmployeeId .
	 * Returns an Employee object response with Data if Employee is Found
	 * Else returns a null
	 * */
	public Employee getEmployeeDetailsById(int id){
		return mapOfEmloyees.get(id);
	}
	/**Create Employee details.
	 * Returns auto-generated Id
	 * */
	public Integer addEmployee(Employee employee){
		count++;
		employee.setEmployeeId(count);
		mapOfEmloyees.put(count, employee);
		return count;
	}
	
	/**Update the Employee details,
	 * Receives the Employee Object and returns the updated Details  
	 * */
	public Employee updateEmployee (Employee employee){
		mapOfEmloyees.put(employee.getEmployeeId(), employee);
		return employee;
	}
	/**Delete the Employee details,
	 * Receives the EmployeeID and returns the deleted employee's Details  
	 * */
	public Employee removeEmployee (int id){
		Employee emp= mapOfEmloyees.remove(id);
		return emp;
	}
	
}

The code above represents the DAO layer of the application.

The CRUD operations are performed on a Map collection holding Employee objects, in order to avoid any database dependency and keep the application lightweight.
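
The Employee model class referenced by the DAO and the controller is not shown in the listings; a minimal sketch consistent with how it is used above (the salary and departmentId field names are assumptions inferred from the constructor arguments) could look like this:

package com.xyz.app.model;

public class Employee {
	private String name;
	private int employeeId;
	private double salary;
	private int departmentId; // hypothetical name for the fourth constructor argument

	public Employee() { } // default constructor needed for JSON deserialization

	public Employee(String name, int employeeId, double salary, int departmentId) {
		this.name = name;
		this.employeeId = employeeId;
		this.salary = salary;
		this.departmentId = departmentId;
	}

	public String getName() { return name; }
	public void setName(String name) { this.name = name; }
	public int getEmployeeId() { return employeeId; }
	public void setEmployeeId(int employeeId) { this.employeeId = employeeId; }
	public double getSalary() { return salary; }
	public void setSalary(double salary) { this.salary = salary; }
	public int getDepartmentId() { return departmentId; }
	public void setDepartmentId(int departmentId) { this.departmentId = departmentId; }

	@Override
	public String toString() {
		return "Employee [name=" + name + ", employeeId=" + employeeId
				+ ", salary=" + salary + ", departmentId=" + departmentId + "]";
	}
}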

EmployeeController.java

package com.xyz.app.controller;

import java.util.Collection;

import org.apache.log4j.Logger;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.http.HttpStatus;
import org.springframework.http.MediaType;
import org.springframework.http.ResponseEntity;
import org.springframework.web.bind.annotation.PathVariable;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RequestMethod;
import org.springframework.web.bind.annotation.RestController;

import com.xyz.app.dao.EmployeeDAO;
import com.xyz.app.model.Employee;

@RestController
public class EmployeeController {
	
	@Autowired 
	private EmployeeDAO employeeDAO;
	
	
	public static Logger logger = Logger.getLogger(EmployeeController.class);
	
	/** Method is used to get all the employee details and return the same 
	 */ 
	@RequestMapping(value="emp/controller/getDetails",method=RequestMethod.GET,produces=MediaType.APPLICATION_JSON_VALUE)
	public ResponseEntity<Collection> getEmployeeDetails(){
		logger.info("From Producer method[getEmployeeDetails] start");
			logger.debug("From Producer method[getEmployeeDetails] start");
			Collection  listEmployee =employeeDAO.getAllEmployee();
			logger.debug("From Producer method[getEmployeeDetails] start");
		logger.info("From Producer method[getEmployeeDetails] end");
		return new ResponseEntity<Collection>(listEmployee, HttpStatus.OK);
	}
	/** Method finds an employee using employeeId and returns the found Employee.
		If no employee exists for the given employeeId,
		then a response with HttpStatus.NOT_FOUND as status is returned
	 */ 
	@RequestMapping(value="emp/controller/getDetailsById/{id}",method=RequestMethod.GET,produces=MediaType.APPLICATION_JSON_VALUE)
	public ResponseEntity getEmployeeDetailByEmployeeId(@PathVariable("id") int myId){
		logger.info("From Producer method[getEmployeeDetailByEmployeeId] start");
		Employee employee = employeeDAO.getEmployeeDetailsById(myId);
		if(employee!=null)
		{
			logger.info("From Producer method[getEmployeeDetailByEmployeeId] end");
			return new ResponseEntity(employee,HttpStatus.OK);
		}
		else
		{
			logger.info("From Producer method[getEmployeeDetailByEmployeeId] end");
			return new ResponseEntity(HttpStatus.NOT_FOUND);
		}
		
	}
	
	/** Method creates an employee and returns the auto-generated employeeId */ 
	@RequestMapping(value="/emp/controller/addEmp",
			method=RequestMethod.POST,
			consumes=MediaType.APPLICATION_JSON_VALUE,
			produces=MediaType.TEXT_HTML_VALUE)
	public ResponseEntity addEmployee(@RequestBody Employee employee){
		logger.info("From Producer method[addEmployee] start");
			logger.debug("From Producer method[addEmployee] start");
			int empId= employeeDAO.addEmployee(employee);
			logger.debug("From Producer method[addEmployee] start");
		logger.info("From Producer method[addEmployee] end");
		return new ResponseEntity("Employee added successfully with id:"+empId,HttpStatus.CREATED);
	}

	/** Method updates an employee and returns the updated Employee 
 		If the Employee to be updated does not exist, then null is returned with 
 		HttpStatus.INTERNAL_SERVER_ERROR as status
	 */ 
	@RequestMapping(value="/emp/controller/updateEmp",
			method=RequestMethod.PUT,
			consumes=MediaType.APPLICATION_JSON_VALUE,
			produces=MediaType.APPLICATION_JSON_VALUE)
	public ResponseEntity updateEmployee(@RequestBody Employee employee){
		logger.info("From Producer method[updateEmployee] start");
		if(employeeDAO.getEmployeeDetailsById(employee.getEmployeeId())==null){
			Employee employee2=null;
			return new ResponseEntity(employee2,HttpStatus.INTERNAL_SERVER_ERROR);
		}
		System.out.println(employee);
		employeeDAO.updateEmployee(employee);
		logger.info("From Producer method[updateEmployee] end");
		return new ResponseEntity(employee,HttpStatus.OK);
	}
	
	/** Method deletes an employee using employeeId and returns the deleted Employee 
	 	If the Employee to be deleted does not exist, then null is returned with 
	 	HttpStatus.INTERNAL_SERVER_ERROR as status
	 */ 
	@RequestMapping(value="/emp/controller/deleteEmp/{id}",
			method=RequestMethod.DELETE,
			produces=MediaType.APPLICATION_JSON_VALUE)
	public ResponseEntity deleteEmployee(@PathVariable("id") int myId){
		logger.info("From Producer method[deleteEmployee] start");
		if(employeeDAO.getEmployeeDetailsById(myId)==null){
			Employee employee2=null;
			return new ResponseEntity(employee2,HttpStatus.INTERNAL_SERVER_ERROR);
		}
		Employee employee = employeeDAO.removeEmployee(myId);
		System.out.println("Removed: "+employee);
		logger.info("From Producer method[deleteEmployee] end");
		return new ResponseEntity(employee,HttpStatus.OK);
	}
}

The code above represents the controller layer of the application with its request handlers.

The request handlers invoke the DAO layer functions and perform the CRUD operations.

application.properties

server.port=8090
logging.level.com.xyz.app.controller.EmployeeController=DEBUG
#name of the log file to be created
#same file will be given as input to logstash
logging.file=app.log
spring.application.name=producer

The listing above shows the properties configured for the Spring Boot based application.
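
The listings above do not include the Spring Boot bootstrap class that wires the controller and DAO together; a minimal sketch (the class name and its placement in the com.xyz.app package are assumptions) would be:

package com.xyz.app;

import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

// Component scanning starts from this package, so com.xyz.app.controller
// and com.xyz.app.dao are picked up automatically.
@SpringBootApplication
public class ProducerApplication {
	public static void main(String[] args) {
		SpringApplication.run(ProducerApplication.class, args);
	}
}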

5. Logstash Configuration

As mentioned in section 3.3, a configuration file needs to be created for Logstash.

Logstash uses this configuration file to take input from the microservice logs.

The logs are transformed into JSON and fed into Elasticsearch.

cst_logstash.conf

input {
  file {
    # If more than one log files from different microservices have to be tracked then a comma-separated list of log files can 
    # be provided
    path => ["PATH-TO-UPDATE/app.log"]
    codec => multiline {
      pattern => "^%{YEAR}-%{MONTHNUM}-%{MONTHDAY} %{TIME}.*"
      negate => "true"
      what => "previous"
    }
  }
}
output {
  stdout {
    codec => rubydebug
  }
  # Sending properly parsed log events to elasticsearch
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}

The Logstash configuration above listens to the log file and pushes the log messages to Elasticsearch.

Note: change the log path according to your setup.

6. Execution and Output

6.1 Executing the Microservice to Generate Logs

The Spring Boot application can be deployed using mvn clean install spring-boot:run, and the following URL can then be accessed from a browser or a Postman client: http://localhost:8090/emp/controller/getDetails

This hits the microservice and generates logs on the microservice side.
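
To produce several log entries in one go instead of refreshing the browser (a minimal sketch, assuming the application is running locally on the port configured in application.properties), the same endpoint can also be invoked from plain Java:

import java.net.HttpURLConnection;
import java.net.URL;

public class LogTrafficGenerator {
	public static void main(String[] args) throws Exception {
		URL url = new URL("http://localhost:8090/emp/controller/getDetails");
		// Fire a handful of GET requests so that app.log receives several INFO/DEBUG lines
		for (int i = 0; i < 5; i++) {
			HttpURLConnection con = (HttpURLConnection) url.openConnection();
			con.setRequestMethod("GET");
			System.out.println("Request " + (i + 1) + " -> HTTP " + con.getResponseCode());
			con.disconnect();
		}
	}
}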

These logs are read by Logstash and pushed to Elasticsearch; they can then be viewed in Kibana by following the steps in the next section.
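
Before switching to Kibana, it can be useful to confirm that the events actually reached Elasticsearch. The sketch below (assuming the default Logstash index naming of logstash-* and Elasticsearch on port 9200) queries the Elasticsearch search API for the indexed log events:

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class IndexedLogCheck {
	public static void main(String[] args) throws Exception {
		// Search all logstash-* indices for events containing "getEmployeeDetails"
		URL url = new URL("http://localhost:9200/logstash-*/_search?q=message:getEmployeeDetails");
		HttpURLConnection con = (HttpURLConnection) url.openConnection();
		con.setRequestMethod("GET");
		try (BufferedReader in = new BufferedReader(new InputStreamReader(con.getInputStream()))) {
			String line;
			while ((line = in.readLine()) != null) {
				System.out.println(line); // JSON response with the total hit count and matching events
			}
		}
	}
}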

6.2 Steps to View the Output in Kibana

  • Open the Kibana UI at http://localhost:5601 and create an index pattern matching the indices written by Logstash (by default, logstash-*); the screen below is displayed
[Figure: Kibana Index Creation - 1]
  • Click Next, and the following screen is displayed
[Figure: Kibana Index Creation - 2]

Select the option highlighted above and click "Create index pattern".

  • After selecting the "Discover" option from the left-hand menu, the page is displayed as follows:
[Figure: Viewing logs in Kibana - 1]
  • Logs can be visualized and filtered based on the attributes highlighted above. Hovering over any attribute displays an "Add" button for it. After selecting the message attribute, the view appears as shown below:
[Figure: Viewing logs in Kibana - 2]

7. References

8. Download the Eclipse Project

You can download the full source code of this example here: microservice

Translated from: https://www.javacodegeeks.com/2018/12/log-aggregation-using-elk-stack.html
