Log collection based on Kafka (pre-3.0)

A log-collection project built on Kafka (versions before 3.0)

What is Kafka

Overview:

Kafka is an open-source stream-processing platform developed by the Apache Software Foundation and written in Scala and Java. It is a high-throughput, distributed publish-subscribe messaging system that can handle all of the activity-stream data of a website. Such activity (page views, searches, and other user actions) is a key ingredient of many social features on the modern web, and because of the throughput involved, this data is usually handled through log processing and log aggregation. For systems like Hadoop that work with log data and offline analytics but also need real-time processing, Kafka is a practical solution: it aims to unify online and offline message processing through Hadoop's parallel loading mechanism, while providing real-time messaging across a cluster.

Features:

1. Kafka is a high-throughput, distributed publish-subscribe messaging system.
2. Messages are persisted through an O(1) disk data structure that keeps performance stable over long periods, even with terabytes of stored messages.
3. High throughput: even on very ordinary hardware, Kafka can handle millions of messages per second.
4. Messages can be partitioned across the Kafka servers and the consumer machines.
5. Hadoop parallel data loading is supported.

Architecture diagram:

(diagram omitted)

I. Kafka setup procedure

1. Environment preparation

Prepare four virtual machines (this is a lab setup, so nginx is installed directly on the Kafka servers: three machines host the Kafka cluster plus the nginx cluster, and one machine hosts MySQL).

Host             IP address
nginx-kafka01    192.168.220.11
nginx-kafka02    192.168.220.105
nginx-kafka03    192.168.220.106
mysql            192.168.220.107

Configure DNS

[root@nginx-kafka01 ~]# cat /etc/resolv.conf
# Generated by NetworkManager
nameserver 114.114.114.114

Set the hostname, and add name-resolution entries on every machine

# Set the hostname
vim /etc/hostname
hostname -F /etc/hostname

# Add the name-resolution entries on every machine
vim /etc/hosts
192.168.220.11  nginx-kafka01
192.168.220.105  nginx-kafka02
192.168.220.106  nginx-kafka03

Synchronize the time, and disable the firewall and SELinux (on every machine)

[root@nginx-kafka01 ~]# yum -y install chrony
# Enable at boot (use disable to turn autostart off)
[root@nginx-kafka01 ~]# systemctl enable chronyd
[root@nginx-kafka01 ~]# systemctl start chronyd
# Set the time zone
[root@nginx-kafka01 ~]# cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime

# Permanently disable firewalld
[root@nginx-kafka01 ~]# systemctl stop firewalld
[root@nginx-kafka01 ~]# systemctl disable firewalld

# Permanently disable SELinux
[root@nginx-kafka01 ~]# vim /etc/selinux/config
Set SELINUX=disabled

2. Installing nginx (covered in an earlier post)

3. Installing Kafka

1. Install a Java environment; Kafka runs on the JVM, so Java is required

# Install the Java environment
[root@nginx-kafka01 ~]# yum install java wget  -y

2. Download the Kafka package and extract it

# Download the Kafka package
[root@nginx-kafka01 ~]# yum install wget -y 
[root@nginx-kafka01 ~]# wget https://mirrors.bfsu.edu.cn/apache/kafka/2.8.1/kafka_2.12-2.8.1.tgz

# Extract the Kafka package
[root@nginx-kafka01 ~]# tar  xf  kafka_2.12-2.8.1.tgz
# Create /opt/kafka and keep the archive and the extracted directory under it
[root@nginx-kafka01 ~]# cd /opt/kafka/
[root@nginx-kafka01 kafka]# ls
kafka_2.12-2.8.1  kafka_2.12-2.8.1.tgz

3. Configure Kafka

# Edit kafka_2.12-2.8.1/config/server.properties
# broker.id must be unique on each server in the cluster: here nginx-kafka01 is 1, nginx-kafka02 is 2, and nginx-kafka03 is 3
broker.id=1
listeners=PLAINTEXT://nginx-kafka01:9092
zookeeper.connect=192.168.220.11:2181,192.168.220.105:2181,192.168.220.106:2181

4. Installing ZooKeeper

Download URL: https://archive.apache.org/dist/zookeeper/zookeeper-3.6.3/apache-zookeeper-3.6.3-bin.tar.gz

Extract the archive and edit the configuration file

# Go to /opt/zookeeper/apache-zookeeper-3.6.3-bin/conf
# Copy the sample configuration
[root@nginx-kafka01 ~]# cp zoo_sample.cfg zoo.cfg
# Edit zoo.cfg and add the following three lines:
[root@nginx-kafka01 ~]# vim zoo.cfg
server.1=192.168.220.11:3888:4888
server.2=192.168.220.105:3888:4888
server.3=192.168.220.106:3888:4888

# 3888 and 4888 are both ports: one is used for data exchange between ensemble members, the other for liveness checks and leader election

Create the /tmp/zookeeper directory and add a myid file to it; the file's content is the ZooKeeper id assigned to the local machine

# For example, on the 192.168.220.11 machine
# myid must match the server.N number in zoo.cfg
mkdir -p /tmp/zookeeper
echo 1 > /tmp/zookeeper/myid

5. Starting ZooKeeper and Kafka

Note: when starting, always start ZooKeeper first, then Kafka.
When shutting down, stop Kafka first, then ZooKeeper.

Start ZooKeeper

# Start ZooKeeper
[root@nginx-kafka01 ~]# /opt/zookeeper/apache-zookeeper-3.6.3-bin/bin/zkServer.sh start
/usr/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED

# Check the ZooKeeper status
[root@nginx-kafka03 ~]# /opt/zookeeper/apache-zookeeper-3.6.3-bin/bin/zkServer.sh status
/bin/java
ZooKeeper JMX enabled by default
Using config: /opt/zookeeper/apache-zookeeper-3.6.3-bin/bin/../conf/zoo.cfg
Client port found: 2181. Client address: localhost. Client SSL: false.
Mode: leader
# Exactly one node is the leader; the others are followers

Start Kafka

# Start Kafka
[root@nginx-kafka01 ~]# /opt/kafka/kafka_2.12-2.8.1/bin/kafka-server-start.sh -daemon /opt/kafka/kafka_2.12-2.8.1/config/server.properties

# Connect to ZooKeeper to check whether the Kafka brokers registered successfully
[root@nginx-kafka01 ~]# /opt/zookeeper/apache-zookeeper-3.6.3-bin/bin/zkCli.sh

[zk: localhost:2181(CONNECTED) 0] ls /
[admin, brokers, cluster, config, consumers, controller, controller_epoch, feature, isr_change_notification, latest_producer_id_block, log_dir_event_notification, sc, zookeeper]
[zk: localhost:2181(CONNECTED) 1] ls /brokers 
[ids, seqid, topics]
[zk: localhost:2181(CONNECTED) 2] ls /brokers/ids 
[1, 2, 3]
# If ids is [1, 2, 3], all three Kafka brokers started and registered successfully
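
The same check can be scripted instead of using zkCli.sh. Below is a minimal sketch that lists the registered broker ids through ZooKeeper with the kazoo client; kazoo is not part of this setup, so treat it as an optional extra dependency (pip3 install kazoo).

# Minimal sketch (optional): list the registered Kafka broker ids via ZooKeeper
from kazoo.client import KazooClient

zk = KazooClient(hosts="192.168.220.11:2181,192.168.220.105:2181,192.168.220.106:2181")
zk.start()
print(zk.get_children("/brokers/ids"))   # expect ['1', '2', '3'] when all brokers are up
zk.stop()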

II. Producer and consumer test

First, create a topic named zhang with 3 partitions and a replication factor of 3 (run from broker2, 192.168.220.105):

[root@nginx-kafka02 ~]# /opt/kafka/kafka_2.12-2.8.1/bin/kafka-topics.sh --create --zookeeper 192.168.220.105:2181 --replication-factor 3 --partitions 3 --topic zhang

Producer test

broker2 (192.168.220.105) acts as the producer, writing to the topic zhang

[root@nginx-kafka02 ~]# /opt/kafka/kafka_2.12-2.8.1/bin/kafka-console-producer.sh --broker-list 192.168.220.105:9092 --topic zhang

(screenshot of the producer console omitted)

Consumer test

broker3 (192.168.220.106) acts as the consumer, reading the topic zhang from the beginning

[root@nginx-kafka03 ~]# /opt/kafka/kafka_2.12-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.220.106:9092 --topic zhang --from-beginning

(screenshot of the consumer console omitted)
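
The same round trip can also be done from Python with pykafka, the library used later in this post for the consumer script. A minimal sketch, assuming pykafka is installed (pip3 install pykafka):

# Minimal sketch: write one message to the zhang topic and read the topic back
from pykafka import KafkaClient

client = KafkaClient(hosts="192.168.220.11:9092,192.168.220.105:9092,192.168.220.106:9092")
topic = client.topics['zhang']

# Produce a test message; the sync producer blocks until the broker acknowledges it
with topic.get_sync_producer() as producer:
    producer.produce(b"hello from pykafka")

# Consume from the beginning of the topic and stop after 5 seconds of silence
consumer = topic.get_simple_consumer(consumer_timeout_ms=5000)
for message in consumer:
    if message is not None:
        print(message.offset, message.value)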

III. Collecting logs with Filebeat

1. Installing Filebeat (on the servers that run nginx)

# Import the Elastic GPG key for the RPM packages
[root@nginx-kafka03 ~]# rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

# Create the repo file (vim /etc/yum.repos.d/fb.repo) with the following content
[elastic-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

# Install with yum
[root@nginx-kafka03 ~]# yum install filebeat -y

# Verify the installation
[root@nginx-kafka03 ~]# rpm -qa | grep filebeat  # check whether filebeat is installed; rpm -qa lists every installed package
filebeat-7.17.10-1.x86_64
[root@nginx-kafka03 ~]# rpm -ql filebeat		# list where filebeat was installed and which files belong to it

2. Start Filebeat and enable it at boot

# Start filebeat
[root@nginx-kafka03 ~]# systemctl start filebeat
# Enable at boot
[root@nginx-kafka03 ~]# systemctl enable filebeat

# Check the filebeat status
# Check the process
[root@nginx-kafka03 ~]# ps aux | grep filebeat
root        918  0.0  2.4 1072904 45428 ?       Ssl  10:25   0:08 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/filebeat --path.logs /var/log/filebeat
root     108199  0.0  0.0 112832   988 pts/1    S+   20:52   0:00 grep --color=auto filebeat
# Check the status with systemctl
[root@nginx-kafka03 ~]# systemctl status filebeat
● filebeat.service - Filebeat sends log files to Logstash or directly to Elasticsearch.
   Loaded: loaded (/usr/lib/systemd/system/filebeat.service; enabled; vendor preset: disabled)
   Active: active (running) since 二 2023-06-20 10:25:08 CST; 10h ago
     Docs: https://www.elastic.co/beats/filebeat
 Main PID: 918 (filebeat)
    Tasks: 6
   Memory: 45.8M
   CGroup: /system.slice/filebeat.service
           └─918 /usr/share/filebeat/bin/filebeat --environment systemd -c /etc/filebeat/filebeat.yml --path.home /usr/share/filebeat --path.config /etc/filebeat --path.data /var/lib/f...

6月 20 20:49:11 nginx-kafka03 filebeat[918]: 2023-06-20T20:49:11.376+0800        INFO        [monitoring]        log/log.go:184        Non-zero metrics in the last 30s        {"monito...
6月 20 20:49:41 nginx-kafka03 filebeat[918]: 2023-06-20T20:49:41.377+0800        INFO        [monitoring]        log/log.go:184        Non-zero metrics in the last 30s        {"monito...
6月 20 20:50:11 nginx-kafka03 filebeat[918]: 2023-06-20T20:50:11.385+0800        INFO        [monitoring]        log/log.go:184        Non-zero metrics in the last 30s        {"monito...
6月 20 20:50:41 nginx-kafka03 filebeat[918]: 2023-06-20T20:50:41.376+0800        INFO        [monitoring]        log/log.go:184        Non-zero metrics in the last 30s        {"monito...
...
Hint: Some lines were ellipsized, use -l to show in full.

3. Configure Filebeat

[root@nginx-kafka03 ~]# cat /etc/filebeat/filebeat.yml 
filebeat.inputs:
- type: log
  # Change to true to enable this input configuration.
  enabled: true
  # Paths that should be crawled and fetched. Glob based paths.
  paths:
    - /usr/local/nginx/logs/access.log
  #==========------------------------------kafka-----------------------------------
output.kafka:
  hosts: ["192.168.220.11:9092","192.168.220.105:9092","192.168.220.106:9092"]
  topic: nginxlog
  keep_alive: 10s
  
# Filebeat acts as the producer: it tails the nginx access log (/usr/local/nginx/logs/access.log) and pushes new entries to the Kafka cluster (the three brokers listed under hosts) on the topic nginxlog; keep_alive: 10s sets the keep-alive period of the connection to the brokers.

4. Test

1. First truncate the nginx access log (clean slate)

[root@nginx-kafka03 ~]# >/usr/local/nginx/logs/access.log

2. Create the nginxlog topic

[root@nginx-kafka02 ~]# /opt/kafka/kafka_2.12-2.8.1/bin/kafka-topics.sh --create --zookeeper 192.168.220.105:2181 --replication-factor 3 --partitions 1 --topic nginxlog

3. Start a consumer (it consumes the nginx log lines that the Filebeat producer shipped)

[root@nginx-kafka02 ~]# /opt/kafka/kafka_2.12-2.8.1/bin/kafka-console-consumer.sh --bootstrap-server 192.168.220.106:9092 --topic nginxlog --from-beginning
OpenJDK 64-Bit Server VM warning: If the number of processors is expected to increase from one, then you should configure the number of parallel GC threads appropriately using -XX:ParallelGCThreads=N
{"@timestamp":"2023-06-19T04:24:13.415Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.17.10"},"log":{"offset":4444,"file":{"path":"/usr/local/nginx/logs/access.log"}},"message":"192.168.220.1 - - [19/Jun/2023:12:24:06 +0800] \"GET / HTTP/1.1\" 200 625 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36\"","input":{"type":"log"},"ecs":{"version":"1.12.0"},"host":{"name":"nginx-kafka03"},"agent":{"id":"a8249178-4c1c-46a3-93a0-7e42f0e925fe","name":"nginx-kafka03","type":"filebeat","version":"7.17.10","hostname":"nginx-kafka03","ephemeral_id":"d862d589-9ce0-4bb7-b06f-14e5a8914f24"}}
{"@timestamp":"2023-06-19T04:24:20.429Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.17.10"},"input":{"type":"log"},"host":{"name":"nginx-kafka03"},"agent":{"id":"a8249178-4c1c-46a3-93a0-7e42f0e925fe","name":"nginx-kafka03","type":"filebeat","version":"7.17.10","hostname":"nginx-kafka03","ephemeral_id":"d862d589-9ce0-4bb7-b06f-14e5a8914f24"},"ecs":{"version":"1.12.0"},"log":{"offset":4634,"file":{"path":"/usr/local/nginx/logs/access.log"}},"message":"192.168.220.1 - - [19/Jun/2023:12:24:16 +0800] \"GET /favicon.ico HTTP/1.1\" 404 555 \"http://192.168.220.106/\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36\""}
{"@timestamp":"2023-06-19T04:25:25.447Z","@metadata":{"beat":"filebeat","type":"_doc","version":"7.17.10"},"log":{"offset":4857,"file":{"path":"/usr/local/nginx/logs/access.log"}},"message":"192.168.220.1 - - [19/Jun/2023:12:25:17 +0800] \"GET / HTTP/1.1\" 304 0 \"-\" \"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/114.0.0.0 Safari/537.36\"","input":{"type":"log"},"ecs":{"version":"1.12.0"},"host":{"name":"nginx-kafka03"},"agent":{"id":"a8249178-4c1c-46a3-93a0-7e42f0e925fe","name":"nginx-kafka03","type":"filebeat","version":"7.17.10","hostname":"nginx-kafka03","ephemeral_id":"d862d589-9ce0-4bb7-b06f-14e5a8914f24"}}

IV. Loading the data into the database

1. Installing MySQL (covered in an earlier post)

2. Create the database and table in MySQL for the Kafka log data

root@kafka 21:31  mysql>create database kafka;
root@kafka 21:31  mysql>show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| kafka              |
| mysql              |
| performance_schema |
| sys                |
+--------------------+
5 rows in set (0.00 sec)
root@kafka 21:31  mysql>use kafka
Database changed
root@kafka 21:32  mysql>create table nginxlog(id int primary key auto_increment, dt datetime not null, prov_name varchar(256), isp_name varchar(256))charset=utf8;
root@kafka 21:33  mysql>show tables;
+-----------------+
| Tables_in_kafka |
+-----------------+
| nginxlog        |
+-----------------+
1 row in set (0.00 sec)

3. Write a Python script that loads the Kafka data into MySQL

The script consumes the nginx log records from Kafka, extracts the IP, time, and bytes fields, resolves each IP to a province and ISP through a Taobao API, formats the time as "2021-10-12 12:00:00", and then writes the result to the database.

import json
import time

import pymysql
import requests
from pykafka import KafkaClient

taobao_url = "https://ip.taobao.com/outGetIpInfo?accessKey=alibaba-inc&ip="


# Look up the province and ISP of an IP address through the Taobao IP API
def resolv_ip(ip):
    response = requests.get(taobao_url + ip)
    if response.status_code == 200:
        tmp_dict = json.loads(response.text)
        prov = tmp_dict["data"]["region"]
        isp = tmp_dict["data"]["isp"]
        return prov, isp
    return None, None


# Convert the time string from the nginx log format into "YYYY-MM-DD HH:MM:SS"
def trans_time(dt):
    # Parse the string into a time struct
    timeArray = time.strptime(dt, "%d/%b/%Y:%H:%M:%S")
    # timeStamp = int(time.mktime(timeArray))
    # Format the time struct back into a string
    new_time = time.strftime("%Y-%m-%d %H:%M:%S", timeArray)
    return new_time


# Read the data from Kafka and extract the fields we need: IP, time, and bytes
client = KafkaClient(hosts="192.168.220.11:9092,192.168.220.105:9092,192.168.220.106:9092")
topic = client.topics['nginxlog']
balanced_consumer = topic.get_balanced_consumer(
    consumer_group='testgroup',
    # Commit offsets automatically
    auto_commit_enable=True,
    zookeeper_connect='nginx-kafka01:2181,nginx-kafka02:2181,nginx-kafka03:2181'
)
# consumer = topic.get_simple_consumer()
for message in balanced_consumer:
    if message is not None:
        try:
            line = json.loads(message.value.decode("utf-8"))
            log = line["message"]
            tmp_lst = log.split()
            ip = tmp_lst[0]
            dt = tmp_lst[3].replace("[", "")
            bt = tmp_lst[9]
            dt = trans_time(dt)
            prov, isp = resolv_ip(ip)
            if prov and isp:
                print(dt, prov, isp)
                md = pymysql.connect(host='192.168.220.107', user='zjx', password='123456')
                cur = md.cursor()
                # Debug output: confirm the connection by listing databases and tables
                cur.execute('show databases;')
                print(cur.fetchall())
                cur.execute('use kafka')
                cur.execute('show tables;')
                print(cur.fetchall())
                try:
                    # Parameterized insert so the driver handles quoting
                    cur.execute('insert into nginxlog (dt, prov_name, isp_name) values (%s, %s, %s)', (dt, prov, isp))
                    md.commit()
                    print("saved successfully")
                except Exception as err:
                    md.rollback()
                    print("insert failed:", err)
                cur.close()
                md.close()
        except Exception:
            pass

Run the script

[root@nginx-kafka02 ~]# python3 python_consumer.py
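
To confirm that rows are actually landing in MySQL, a quick query from Python works as well; this is a sketch that reuses the same connection parameters as the consumer script above.

import pymysql

# Minimal sketch: count and preview the rows inserted by the consumer script
conn = pymysql.connect(host='192.168.220.107', user='zjx', password='123456', database='kafka')
cur = conn.cursor()
cur.execute('select count(*) from nginxlog')
print("rows:", cur.fetchone()[0])
cur.execute('select id, dt, prov_name, isp_name from nginxlog order by id desc limit 5')
for row in cur.fetchall():
    print(row)
cur.close()
conn.close()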

Results

Script output

(screenshot omitted)
Rows written to the database
(screenshot omitted)
