Preface
- This post builds on the nginx tracking (dig) setup from the previous post.
fee
- The earlier posts set up Sentry, which is fairly heavyweight; fee is an open-source, lightweight monitoring system.
- Repository: https://github.com/LianjiaTech/fee
- Clone the repository first, as sketched below.
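- A minimal clone step, assuming git is available locally:
git clone https://github.com/LianjiaTech/fee.git
cd fee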
- Change the tracking server address in the sdk source to your own endpoint:
const feeTarget = 'https://test.com/dig' // tracking server, i.e. the Nginx address
- Then run a build inside the sdk directory:
cd fee/sdk
npm i --registry=https://registry.npm.taobao.org
npm run build
- When the build finishes, create a demo directory, copy the built sdk file into it, and rename it sdk.js.
- The HTML below simulates a project page; once opened, it reports to the tracking nginx.
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>SDK demo</title>
  </head>
  <body>
    <div id="test">Test SDK</div>
    <script src="./sdk.js"></script>
    <script>
      // Simulate a device uuid
      if (!localStorage.getItem("custom_UUID")) {
        const customUUID = Math.random().toString().slice(-6);
        localStorage.setItem("custom_UUID", customUUID);
      }
      window.dt &&
        dt.set({
          pid: "project_test_id", // [required] project id, assigned by the monitoring project team
          uuid: localStorage.getItem("custom_UUID") || "", // [optional] unique device id, used for uv counts & device distribution. Usually available in a cookie; otherwise use the device mac/idfa/imei, or store a random number in storage to simulate a unique device id.
          ucid: "", // [optional] user ucid, used to trace the user when an error occurs. Usually available in a cookie; pass an empty string if unavailable.
          is_test: false, // whether this is test data, defaults to false (in test mode the reported data is only for inspection and is not shown in the system)
          record: {
            time_on_page: true, // whether to track time-on-page, defaults to true
            performance: true, // whether to track page load performance, defaults to true
            js_error: true, // whether to track page errors, defaults to true
            // Configure which error categories to report; only takes effect when js_error is true. All default to true (set an entry to false to suppress that category).
            js_error_report_config: {
              ERROR_RUNTIME: true, // js runtime errors
              ERROR_SCRIPT: true, // js resource load failures
              ERROR_STYLE: true, // css resource load failures
              ERROR_IMAGE: true, // image load failures
              ERROR_AUDIO: true, // audio load failures
              ERROR_VIDEO: true, // video load failures
              ERROR_CONSOLE: true, // vue runtime errors
              ERROR_TRY_CATCH: true, // uncaught errors
              // Custom check function: the final decision, right before reporting, on whether the error should be sent
              // Callback signature:
              //   arguments =>
              //     desc:  string, error description
              //     stack: string, error stack trace
              //   return value =>
              //     true  : send the report
              //     false : do not report
              checkErrrorNeedReport: function (desc, stack) {
                return true;
              },
            },
          },
          // The business side's js version number, uploaded with the tracking data to distinguish data sources
          // Optional, defaults to 1.0.0
          version: "1.0.0",
          // For pages such as
          //   test.com/detail/1.html
          //   test.com/detail/2.html
          //   test.com/detail/3.html
          //   ...
          // the urls differ but they are essentially the same page.
          // The business side therefore passes a handler that derives the real page type from the current url
          // (e.g. listing page / agent detail page) so the monitoring system can classify error sources.
          // Callback signature:
          //   argument => window.location
          //   return value => the page type (under 50 characters; readable text recommended), defaults to the current page url
          getPageType: function (location) {
            return `${location.host}${location.pathname}`;
          },
        });
      // Trigger an error manually on click
      const testDOM = document.getElementById("test");
      testDOM.addEventListener("click", function () {
        console.log(window.a.b);
      });
    </script>
  </body>
</html>
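- To try the page locally you can serve the demo directory with any static file server; a minimal sketch, assuming Node.js is installed (the demo directory name and the port are arbitrary):
cd demo
npx http-server -p 8080
# open http://localhost:8080 and click "Test SDK" to trigger an error report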
kafka
- https://kafka.apache.org/
- Getting-started guide
- Getting-started guide 2
- Install a JDK and download ZooKeeper:
cd /home/admin
# java
yum install java-11-openjdk-devel -y
# zookeeper
wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.6.2/apache-zookeeper-3.6.2-bin.tar.gz
- Extract it. zkServer.sh expects a conf/zoo.cfg, so copy the shipped sample config first:
tar -zxvf apache-zookeeper-3.6.2-bin.tar.gz
cd apache-zookeeper-3.6.2-bin
cp conf/zoo_sample.cfg conf/zoo.cfg
- Start it:
./bin/zkServer.sh start
- Check that it started (ZooKeeper listens on port 2181 by default):
netstat -tunlp | egrep 2181
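- Alternatively, zkServer.sh can report its status directly:
./bin/zkServer.sh status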
- Next, set up Kafka. Download it:
cd /home/admin
wget http://mirror.bit.edu.cn/apache/kafka/2.6.0/kafka_2.13-2.6.0.tgz
- Extract it:
tar -zxvf kafka_2.13-2.6.0.tgz
cd kafka_2.13-2.6.0/config
- Set the listeners to this machine's address (replace the IP with your own):
# vim server.properties
listeners=PLAINTEXT://192.168.199.101:9092
advertised.listeners=PLAINTEXT://192.168.199.101:9092
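- In the same file, zookeeper.connect must point at the ZooKeeper instance started above; the default entry already present in server.properties works when Kafka and ZooKeeper run on the same host:
zookeeper.connect=localhost:2181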
- Start the broker (ZooKeeper must be running):
cd ../bin
./kafka-server-start.sh ./../config/server.properties 1>/dev/null 2>&1 &
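- With default broker settings the fee topic is auto-created on first use; to create it explicitly, a sketch (the single partition and replica are arbitrary choices):
./kafka-topics.sh --create --bootstrap-server 192.168.199.101:9092 --topic fee --partitions 1 --replication-factor 1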
- Start a console consumer:
./kafka-console-consumer.sh --bootstrap-server 192.168.199.101:9092 --topic fee --from-beginning
- Open another terminal and produce a message:
./kafka-console-producer.sh --broker-list 192.168.199.101:9092 --topic fee
> An awesome message from me!
- If the consumer window prints the message, Kafka is working.
Filebeat
- Official site
- Filebeat acts as the producer: it picks up the logs that nginx writes and sends them to Kafka as messages.
- Download and install it from the Elastic yum repository:
sudo rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
- Under /etc/yum.repos.d/, create a file named elastic.repo with the following contents:
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
- Install:
sudo yum install filebeat -y
- The install directory is /usr/share/filebeat; the config file is /etc/filebeat/filebeat.yml.
vim /etc/filebeat/filebeat.yml
- Replace the IPs below with your own; the parts that matter are the nginx log input path, the log level, and the output.kafka section at the end.
###################### Filebeat Configuration Example #########################
# This file is an example configuration file highlighting only the most common
# options. The filebeat.reference.yml file from the same directory contains all the
# supported options with more comments. You can use it as a reference.
#
# You can find the full configuration reference here:
# https://www.elastic.co/guide/en/beats/filebeat/index.html
# For more available modules and options, please see the filebeat.reference.yml sample
# configuration file.
# ============================== Filebeat inputs ===============================
filebeat.inputs:
# Each - is an input. Most options can be set at the input level, so
# you can use different inputs for various configurations.
# Below are the input specific configurations.
- type: log

  # Change to true to enable this input configuration.
  enabled: true

  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /var/log/nginx/ferms/*.log
    #- c:\programdata\elasticsearch\logs\*
# Exclude lines. A list of regular expressions to match. It drops the lines that are
# matching any regular expression from the list.
#exclude_lines: ['^DBG']
# Include lines. A list of regular expressions to match. It exports the lines that are
# matching any regular expression from the list.
#include_lines: ['^ERR', '^WARN']
# Exclude files. A list of regular expressions to match. Filebeat drops the files that
# are matching any regular expression from the list. By default, no files are dropped.
#exclude_files: ['.gz$']
# Optional additional fields. These fields can be freely picked
# to add additional information to the crawled log files for filtering
#fields:
# level: debug
# review: 1
### Multiline options
# Multiline can be used for log messages spanning multiple lines. This is common
# for Java Stack Traces or C-Line Continuation
# The regexp Pattern that has to be matched. The example pattern matches all lines starting with [
#multiline.pattern: ^\[
# Defines if the pattern set under pattern should be negated or not. Default is false.
#multiline.negate: false
# Match can be set to "after" or "before". It is used to define if lines should be append to a pattern
# that was (not) matched before or after or as long as a pattern is not matched based on negate.
# Note: After is the equivalent to previous and before is the equivalent to to next in Logstash
#multiline.match: after
# ============================== Filebeat modules ==============================
filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false
# Period on which files under path should be checked for changes
#reload.period: 10s
# ======================= Elasticsearch template setting =======================
setup.template.settings:
  index.number_of_shards: 3
#index.codec: best_compression
#_source.enabled: false
# ================================== General ===================================
# The name of the shipper that publishes the network data. It can be used to group
# all the transactions sent by a single shipper in the web interface.
#name:
# The tags of the shipper are included in their own field with each
# transaction published.
#tags: ["service-X", "web-tier"]
# Optional fields that you can specify to add additional information to the
# output.
#fields:
# env: staging
# ================================= Dashboards =================================
# These settings control loading the sample dashboards to the Kibana index. Loading
# the dashboards is disabled by default and can be enabled either by setting the
# options here or by using the `setup` command.
#setup.dashboards.enabled: false
# The URL from where to download the dashboards archive. By default this URL
# has a value which is computed based on the Beat name and version. For released
# versions, this URL points to the dashboard archive on the artifacts.elastic.co
# website.
#setup.dashboards.url:
# =================================== Kibana ===================================
# Starting with Beats version 6.0.0, the dashboards are loaded via the Kibana API.
# This requires a Kibana endpoint configuration.
setup.kibana:
# Kibana Host
# Scheme and port can be left out and will be set to the default (http and 5601)
# In case you specify and additional path, the scheme is required: http://localhost:5601/path
# IPv6 addresses should always be defined as: https://[2001:db8::1]:5601
#host: "localhost:5601"
# Kibana Space ID
# ID of the Kibana Space into which the dashboards should be loaded. By default,
# the Default Space will be used.
#space.id:
# =============================== Elastic Cloud ================================
# These settings simplify using Filebeat with the Elastic Cloud (https://cloud.elastic.co/).
# The cloud.id setting overwrites the `output.elasticsearch.hosts` and
# `setup.kibana.host` options.
# You can find the `cloud.id` in the Elastic Cloud web UI.
#cloud.id:
# The cloud.auth setting overwrites the `output.elasticsearch.username` and
# `output.elasticsearch.password` settings. The format is `<user>:<pass>`.
#cloud.auth:
# ================================== Outputs ===================================
# Configure what output to use when sending the data collected by the beat.
# ---------------------------- Elasticsearch Output ----------------------------
#output.elasticsearch:
  # Array of hosts to connect to.
  #hosts: ["localhost:9200"]
# Protocol - either `http` (default) or `https`.
#protocol: "https"
# Authentication credentials - either API key or username/password.
#api_key: "id:api_key"
#username: "elastic"
#password: "changeme"
# ------------------------------ Logstash Output -------------------------------
#output.logstash:
# The Logstash hosts
#hosts: ["localhost:5044"]
# Optional SSL. By default is off.
# List of root certificates for HTTPS server verifications
#ssl.certificate_authorities: ["/etc/pki/root/ca.pem"]
# Certificate for SSL client authentication
#ssl.certificate: "/etc/pki/client/cert.pem"
# Client Certificate Key
#ssl.key: "/etc/pki/client/cert.key"
# ================================== Logging ===================================
# Sets log level. The default log level is info.
# Available log levels are: error, warning, info, debug
logging.level: debug
# At debug level, you can selectively enable logging only for some components.
# To enable all selectors use ["*"]. Examples of other selectors are "beat",
# "publish", "service".
#logging.selectors: ["*"]
# =============================== Kafka Output =================================
output.kafka:
  hosts: ["192.168.16.121:9092", "xxx.xxx.xxx.xxx:9092", "yyy.yyy.yyy.yyy:9092", "zzz.zzz.zzz.zzz:9092"]
  topic: "fee"
# ============================= X-Pack Monitoring ==============================
# Filebeat can export internal metrics to a central Elasticsearch monitoring
# cluster. This requires xpack monitoring to be enabled in Elasticsearch. The
# reporting is disabled by default.
# Set to true to enable the monitoring reporter.
#monitoring.enabled: false
# Sets the UUID of the Elasticsearch cluster under which monitoring data for this
# Filebeat instance will appear in the Stack Monitoring UI. If output.elasticsearch
# is enabled, the UUID is derived from the Elasticsearch cluster referenced by output.elasticsearch.
#monitoring.cluster_uuid:
# Uncomment to send the metrics to Elasticsearch. Most settings from the
# Elasticsearch output are accepted here as well.
# Note that the settings should point to your Elasticsearch *monitoring* cluster.
# Any setting that is not set is automatically inherited from the Elasticsearch
# output configuration, so if you have the Elasticsearch output configured such
# that it is pointing to your Elasticsearch monitoring cluster, you can simply
# uncomment the following line.
#monitoring.elasticsearch:
# ============================== Instrumentation ===============================
# Instrumentation support for the filebeat.
#instrumentation:
# Set to true to enable instrumentation of filebeat.
#enabled: false
# Environment in which filebeat is running on (eg: staging, production, etc.)
#environment: ""
# APM Server hosts to report instrumentation results to.
#hosts:
# - http://localhost:8200
# API Key for the APM Server(s).
# If api_key is set then secret_token will be ignored.
#api_key:
# Secret token for the APM Server(s).
#secret_token:
# ================================= Processors =================================
processors:
  - drop_fields:
      fields: ["@timestamp", "@metadata", "log", "input", "ecs", "host", "agent"]
# ================================= Migration ==================================
# This allows to enable 6.7 migration aliases
#migration.6_to_7.enabled: true
- Start it:
# start
systemctl start filebeat.service
# start on boot
systemctl enable filebeat.service
- Check the status; it should show as active (running):
# status
systemctl status filebeat.service
# stop
systemctl stop filebeat.service
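- Filebeat's built-in test subcommands are useful for verifying the config and the Kafka output before digging deeper (-c points at the default config location):
filebeat test config -c /etc/filebeat/filebeat.yml
filebeat test output -c /etc/filebeat/filebeat.yml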
- Start a consumer again (from the Kafka bin directory):
./kafka-console-consumer.sh --bootstrap-server=192.168.153.199:9092 --topic fee --from-beginning
- If you run into problems, raise the log level to debug and check Filebeat's registry file to see which logs are being harvested:
# vim /etc/filebeat/filebeat.yml
logging.level: debug
tail -f /var/lib/filebeat/registry/filebeat/log.json
- The consumer should now receive the messages Filebeat produces: open the demo page from the fee section above, which sends a request to the tracking server and produces an nginx log line; Filebeat turns it into a Kafka message, the consumer prints it, and this part of the pipeline is complete.
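- If no messages show up, it can help to watch the nginx tracking log directly while clicking the demo page; the path below matches the Filebeat input configured above:
tail -f /var/log/nginx/ferms/*.log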