Files
application.properties:
# enable external logging configuration
# logging.config=./logback.xml
# business date
mock.date=2020-04-01
# mock data send mode
mock.type=http
# target URL in http mode
mock.url=http://localhost:8080/applog
# number of app startups
mock.startup.count=10000
# max device id
mock.max.mid=50
# max member (user) id
mock.max.uid=500
# max sku id
mock.max.sku-id=10
# average page visit time, in ms
mock.page.during-time-ms=20000
# error rate, in percent
mock.error.rate=3
# delay between log sends, in ms
mock.log.sleep=10
# product-detail page sources: user search, product promotion, smart recommendation, promotional campaign
mock.detail.source-type-rate=40:25:15:20
logback.xml (writes the log files):
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
    <property name="LOG_HOME" value="/applog/gmall2020" />

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>

    <appender name="rollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_HOME}/app.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/app.%d{yyyy-MM-dd}.log</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>%msg%n</pattern>
        </encoder>
    </appender>

    <appender name="errorRollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_HOME}/error.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>200mb</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>10</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
        <filter class="ch.qos.logback.classic.filter.LevelFilter">
            <level>ERROR</level>
            <onMatch>ACCEPT</onMatch>
            <onMismatch>DENY</onMismatch>
        </filter>
    </appender>

    <appender name="async-rollingFile" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="rollingFile" />
        <discardingThreshold>0</discardingThreshold>
        <queueSize>512</queueSize>
    </appender>

    <appender name="dao-rollingFile" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>./logs/dao.%d{yyyy-MM-dd}.%i.log</fileNamePattern>
            <timeBasedFileNamingAndTriggeringPolicy class="ch.qos.logback.core.rolling.SizeAndTimeBasedFNATP">
                <maxFileSize>500mb</maxFileSize>
            </timeBasedFileNamingAndTriggeringPolicy>
            <maxHistory>10</maxHistory>
        </rollingPolicy>
        <encoder>
            <pattern>%date{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="async-daoRollingFile" class="ch.qos.logback.classic.AsyncAppender">
        <appender-ref ref="dao-rollingFile" />
        <includeCallerData>true</includeCallerData>
    </appender>

    <!-- route logs from one specific package to their own appenders -->
    <logger name="com.atgugu.gmall2020.mock.log.Mocker"
            level="INFO" additivity="true">
        <appender-ref ref="rollingFile" />
        <appender-ref ref="console" />
    </logger>

    <root level="error" additivity="true">
        <appender-ref ref="console" />
        <!-- <appender-ref ref="async-rollingFile" /> -->
    </root>
</configuration>
path.json:
[
{"path":["home","good_list","good_detail","cart","trade","payment"],"rate":20 },
{"path":["home","search","good_list","good_detail","login","good_detail","cart","trade","payment"],"rate":50 },
{"path":["home","mine","orders_unpaid","trade","payment"],"rate":10 },
{"path":["home","mine","orders_unpaid","good_detail","good_spec","comments","trade","payment"],"rate":10 },
{"path":["home","mine","orders_unpaid","good_detail","good_spec","comments","home"],"rate":10 }
]
Copy the files into the spark-log folder
On Linux, create a spark-log folder under /home/atguigu/ and send the files to that directory.
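For example (the atguigu account and the presence of a mock-log jar alongside the three files above are assumptions; upload however you prefer):
mkdir /home/atguigu/spark-log
# then upload application.properties, logback.xml, path.json (and the mock jar that reads them)
# into /home/atguigu/spark-log, e.g. with scp or an SFTP tool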
Modify application.properties
First open cmd and run ipconfig /all to check your local IP address; on Windows it usually ends in .1, and you only need to note the network segment.
Modify the file accordingly.
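The entry being changed is presumably mock.url, pointed at the machine that will run the log collector; for example (the IP below is only a placeholder for your own address):
mock.url=http://192.168.1.100:8080/applog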
Create the log directory and grant permissions
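A sketch, matching the LOG_HOME value in logback.xml (the atguigu account is an assumption; use whichever user runs the collector):
sudo mkdir -p /applog/gmall2020
sudo chown -R atguigu:atguigu /applog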
Create the project
Define the GAV
Developer tools
Web project
Kafka integration for the logs
Project name
Wait for the dependencies to download
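The generated pom.xml would then contain roughly these starters (a sketch; versions are managed by the Spring Boot parent that Initializr generates):
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.projectlombok</groupId>
    <artifactId>lombok</artifactId>
</dependency>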
Write the LoggerController class
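A minimal sketch of such a controller (the /applog mapping matches mock.url in application.properties; the package name and the assumption that the mock posts each log line as the request body are mine):
package com.atguigu.gmall2020.logger.controller;   // hypothetical package name

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@RestController
public class LoggerController {

    private static final Logger log = LoggerFactory.getLogger(LoggerController.class);

    // The mock generator sends each log line to http://<host>:8080/applog (see mock.url above).
    @RequestMapping("/applog")
    public String applog(@RequestBody String logString) {
        log.info(logString);   // routed by logback.xml to the console / rolling files
        return logString;
    }
}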
Run it
Print the logs
Copy logback.xml into resources and modify the log save path
Install the plugin
Install the Lombok plugin; restart IDEA after the installation completes
Write the code
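With Lombok installed, the hand-written Logger field can be replaced by the @Slf4j annotation; a sketch of the simplified class:
import lombok.extern.slf4j.Slf4j;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@Slf4j
@RestController
public class LoggerController {

    @RequestMapping("/applog")
    public String applog(@RequestBody String logString) {
        log.info(logString);   // the 'log' field is generated by @Slf4j
        return logString;
    }
}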
Check the results
Send the data to Kafka
- Add the dependency
<dependency>
    <groupId>com.alibaba</groupId>
    <artifactId>fastjson</artifactId>
    <version>1.2.56</version>
</dependency>
- Modify the Kafka address in application.properties
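Assuming the standard Spring Kafka property name, the entry would look like this (broker list taken from the cluster used in the commands below):
spring.kafka.bootstrap-servers=hadoop102:9092,hadoop103:9092,hadoop104:9092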
- Write the code
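A sketch of the controller extended to forward each log line to Kafka (the topic name GMALL_START matches the topic created below; the KafkaTemplate usage assumes the spring-kafka starter and the bootstrap-servers property above):
import lombok.extern.slf4j.Slf4j;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RequestMapping;
import org.springframework.web.bind.annotation.RestController;

@Slf4j
@RestController
public class LoggerController {

    @Autowired
    private KafkaTemplate<String, String> kafkaTemplate;

    @RequestMapping("/applog")
    public String applog(@RequestBody String logString) {
        log.info(logString);                          // keep writing to the log files
        kafkaTemplate.send("GMALL_START", logString); // forward to the Kafka topic created below
        return logString;
    }
}
In the full project the fastjson dependency added above can be used to parse each line and route start-up and event logs to separate topics; this single-topic version is enough to verify the pipeline.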
- Start Kafka
- Create the Kafka topic (skip this if it already exists):
kafka-topics.sh --create --topic GMALL_START --zookeeper hadoop102:2181,hadoop103:2181,hadoop104:2181 --partitions 12 --replication-factor 1
- Check consumption:
/opt/module/kafka_2.11-0.11.0.2/bin/kafka-console-consumer.sh --bootstrap-server hadoop102:9092,hadoop103:9092,hadoop104:9092 --topic GMALL_START