Introduction
Installing the JDK
Installing Docker
curl -sSL https://get.daocloud.io/docker | sh
Installing Elasticsearch with Docker
Installing Kibana
Installing Logstash
Installing Kafka and ZooKeeper with Docker
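The article's host is 121.40.65.163 with Elasticsearch on 9200 (HTTP) and 9300 (transport). A single-node Elasticsearch matching the transport client that Spring Boot 2.0's `spring-boot-starter-data-elasticsearch` expects can be started roughly as follows; the image tag is an assumption, and the cluster name must match `cluster-name` in `application.yml`:

```shell
# Pull and run a single-node Elasticsearch; 9200 = HTTP API, 9300 = transport protocol
docker pull elasticsearch:5.6.16
docker run -d --name elasticsearch \
  -p 9200:9200 -p 9300:9300 \
  -e "discovery.type=single-node" \
  -e "cluster.name=elasticsearch-cluster" \
  elasticsearch:5.6.16
```

Once the container is up, `http://<host>:9200/` should return the cluster banner JSON.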
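Kibana serves its UI on 5601 and needs to know where Elasticsearch lives. A sketch of the Docker invocation, assuming the `elasticsearch` container from the previous step and a Kibana image tag matching the ES version:

```shell
# Kibana on 5601, linked to the Elasticsearch container
docker run -d --name kibana --link elasticsearch \
  -e ELASTICSEARCH_URL=http://elasticsearch:9200 \
  -p 5601:5601 \
  kibana:5.6.16
```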
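Kafka needs ZooKeeper running first. A minimal sketch using the widely used `wurstmeister` images (the images and the advertised-listener values are assumptions; the advertised address must be reachable by both Logstash and the Spring Boot application):

```shell
# ZooKeeper first, then Kafka pointing at it
docker run -d --name zookeeper -p 2181:2181 wurstmeister/zookeeper
docker run -d --name kafka -p 9092:9092 \
  -e KAFKA_ZOOKEEPER_CONNECT=121.40.65.163:2181 \
  -e KAFKA_LISTENERS=PLAINTEXT://0.0.0.0:9092 \
  -e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://121.40.65.163:9092 \
  wurstmeister/kafka
```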
Distributed logging with ELK + Kafka
Once ELK and Kafka are installed, Logstash needs to be wired to Kafka: Logstash consumes messages from the Kafka topic and writes them into Elasticsearch.
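A minimal Logstash pipeline for this wiring might look like the following; the topic name `my_log` matches the one used by the application's `KafkaSender`, the host and ports are the article's, and the index name pattern is an assumption:

```conf
input {
  kafka {
    bootstrap_servers => "121.40.65.163:9092"
    topics            => ["my_log"]
    codec             => "json"
  }
}
output {
  elasticsearch {
    hosts => ["121.40.65.163:9200"]
    index => "my_log-%{+YYYY.MM.dd}"
  }
}
```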
Integrating Spring Boot with Elasticsearch
Source code download: https://pan.baidu.com/s/13PuPfOSrUkFRbUn0hsYBTw
Extraction code: 8b7a
1. Create a Maven project in IDEA
2. Add the following to pom.xml
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.0.0.RELEASE</version>
    <relativePath /> <!-- lookup parent from repository -->
</parent>
<dependencies>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-web</artifactId>
    </dependency>
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-data-elasticsearch</artifactId>
    </dependency>
    <dependency>
        <groupId>com.querydsl</groupId>
        <artifactId>querydsl-apt</artifactId>
    </dependency>
    <dependency>
        <groupId>com.querydsl</groupId>
        <artifactId>querydsl-jpa</artifactId>
    </dependency>
    <dependency>
        <groupId>org.projectlombok</groupId>
        <artifactId>lombok</artifactId>
    </dependency>
    <!-- Orika bean mapping -->
    <dependency>
        <groupId>ma.glasnost.orika</groupId>
        <artifactId>orika-core</artifactId>
        <version>1.5.2</version>
    </dependency>
    <!-- Kafka -->
    <dependency>
        <groupId>org.springframework.kafka</groupId>
        <artifactId>spring-kafka</artifactId>
    </dependency>
    <dependency>
        <groupId>com.alibaba</groupId>
        <artifactId>fastjson</artifactId>
        <version>1.2.47</version>
    </dependency>
    <!-- AOP support -->
    <dependency>
        <groupId>org.springframework.boot</groupId>
        <artifactId>spring-boot-starter-aop</artifactId>
    </dependency>
</dependencies>
3. Create application.yml
Replace the IP addresses below with your own.
spring:
  data:
    elasticsearch:
      # Cluster name
      cluster-name: elasticsearch-cluster
      # Cluster node address
      cluster-nodes: 121.40.65.163:9300
  kafka:
    # Kafka broker address(es); multiple addresses may be listed
    bootstrap-servers: 121.40.65.163:9092
4. Create an AOP aspect that intercepts the methods to be logged and publishes the log entries to Kafka
@Aspect
@Component
public class AopLogAspect {

    @Autowired
    private KafkaSender<JSONObject> kafkaSender;

    // Declare a pointcut: every method in the controller package
    @Pointcut("execution(* com.lxl.controller.*.*(..))")
    private void serviceAspect() {
    }

    // Before the method runs, capture the request details and publish them to Kafka
    @Before(value = "serviceAspect()")
    public void methodBefore(JoinPoint joinPoint) {
        ServletRequestAttributes requestAttributes = (ServletRequestAttributes) RequestContextHolder
                .getRequestAttributes();
        HttpServletRequest request = requestAttributes.getRequest();

        JSONObject jsonObject = new JSONObject();
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); // timestamp format
        jsonObject.put("request_time", df.format(new Date()));
        jsonObject.put("request_url", request.getRequestURL().toString());
        jsonObject.put("request_method", request.getMethod());
        jsonObject.put("signature", joinPoint.getSignature());
        jsonObject.put("request_args", Arrays.toString(joinPoint.getArgs()));

        JSONObject requestJsonObject = new JSONObject();
        requestJsonObject.put("request", jsonObject);
        kafkaSender.send(requestJsonObject);
    }

    // After the method returns, publish the response content to Kafka
    @AfterReturning(returning = "o", pointcut = "serviceAspect()")
    public void methodAfterReturning(Object o) {
        JSONObject respJSONObject = new JSONObject();
        JSONObject jsonObject = new JSONObject();
        SimpleDateFormat df = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss"); // timestamp format
        jsonObject.put("response_time", df.format(new Date()));
        jsonObject.put("response_content", JSONObject.toJSONString(o));
        respJSONObject.put("response", jsonObject);
        kafkaSender.send(respJSONObject);
    }
}
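With the aspect in place, each intercepted call produces two Kafka messages, a request envelope and a response envelope. The request envelope is shaped like the following (all field values are illustrative, not real output):

```json
{
  "request": {
    "request_time": "2020-01-01 12:00:00",
    "request_url": "http://121.40.65.163:8080/test/log",
    "request_method": "GET",
    "signature": "String com.lxl.controller.TestController.testLog()",
    "request_args": "[]"
  }
}
```

The response envelope wraps `response_time` and `response_content` under a top-level `response` key in the same way. Because the payload is JSON, Logstash can parse the fields out with a JSON codec before indexing them into Elasticsearch.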
5. Add a Kafka sender component
@Component
@Slf4j
public class KafkaSender<T> {

    @Autowired
    private KafkaTemplate<String, Object> kafkaTemplate;

    /**
     * Send a message to Kafka.
     *
     * @param obj the message object
     */
    public void send(T obj) {
        String jsonObj = JSON.toJSONString(obj);
        log.info("------------ message = {}", jsonObj);
        // Send the message to the my_log topic
        ListenableFuture<SendResult<String, Object>> future = kafkaTemplate.send("my_log", jsonObj);
        future.addCallback(new ListenableFutureCallback<SendResult<String, Object>>() {
            @Override
            public void onFailure(Throwable throwable) {
                log.info("Produce: the message failed to be sent: " + throwable.getMessage());
            }

            @Override
            public void onSuccess(SendResult<String, Object> stringObjectSendResult) {
                log.info("Produce: the message was sent successfully, result: " + stringObjectSendResult.toString());
            }
        });
    }
}
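To exercise the pipeline, any method in the `com.lxl.controller` package will do, since that is what the aspect's pointcut matches. A hypothetical controller sketch (the class name and mapping are assumptions, not from the downloadable source):

```java
// Hypothetical controller under com.lxl.controller, matched by the aspect's pointcut
package com.lxl.controller;

import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class TestLogController {

    // Hitting GET /test/log triggers methodBefore and methodAfterReturning in
    // AopLogAspect, which publish the request/response envelopes to the my_log topic
    @GetMapping("/test/log")
    public String testLog() {
        return "ok";
    }
}
```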
Now test it: call a controller method, let the aspect intercept it, and the log entry is sent to Kafka.
After the request, the message appears in Logstash's log,
which confirms the log entry was received.
Then query Kibana to verify the data has landed in Elasticsearch.
Integration complete.
Log entry format: traceId
Elasticsearch: http://121.40.65.163:9200/
Kibana: http://121.40.65.163:5601/app/kibana