Real-time log analysis of IP access counts with Flume + Kafka + Spark Streaming + Redis + MySQL

I'm still learning this as a beginner; if you spot any mistakes please point them out. Thanks!

1. Start ZooKeeper and Kafka and create a topic named test-fkss. For easier monitoring I created it through kafka-manager.

[Screenshot: the test-fkss topic created in kafka-manager]
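If you prefer the command line over kafka-manager, the topic can also be created with the kafka-topics.sh script that ships with Kafka. The ZooKeeper address below is an assumption for my Docker setup; adjust it to yours:

bin/kafka-topics.sh --create --zookeeper 172.17.0.2:2181 --replication-factor 1 --partitions 1 --topic test-fkss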

2. Configure and start Flume. It tails the file /home/czh/docker-public-file/testflume.log and forwards new lines to Kafka.

a1.sources = r1
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = exec
a1.sources.r1.command = tail -F /home/czh/docker-public-file/testflume.log
a1.sources.r1.channels = c1

a1.sinks.k1.type = org.apache.flume.sink.kafka.KafkaSink
a1.sinks.k1.topic = test-fkss
a1.sinks.k1.brokerList = 172.17.0.2:9092
a1.sinks.k1.requiredAcks = 1
a1.sinks.k1.batchSize = 2
a1.sinks.k1.channel = c1

a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

Start Flume from the flume directory with: bin/flume-ng agent --conf conf --conf-file conf/flume-conf.properties.template --name a1 -Dflume.root.logger=INFO,console

[Screenshot: Flume startup log]

When the last line of the startup log appears, Flume has started successfully.
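To quickly check the Flume-to-Kafka leg, you can append a test line to the tailed file by hand (the timestamp and IP below are made up) and watch it arrive on the test-fkss topic:

echo "$(date +%s) 1.2.3.4" >> /home/czh/docker-public-file/testflume.log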

3. Start Redis

[Screenshot: Redis server started]
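Note that the RedisClient connection pool further down connects to 127.0.0.1:6599 rather than the default port 6379, so Redis needs to listen on that port. A minimal way to start it that way, assuming a local redis-server binary, is:

redis-server --port 6599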

4. Start the Spark Streaming log-processing program. The code is as follows.

Analysis program: SparkStreamDemo.scala

import kafka.serializer.StringDecoder
import org.apache.spark.streaming.kafka.KafkaUtils
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkConf, SparkContext}

object SparkStreamDemo {
  def main(args: Array[String]) {
    val conf = new SparkConf()
    conf.setAppName("flume_spark_streaming")
    conf.setMaster("local[4]")

    val sc = new SparkContext(conf)
    sc.setCheckpointDir("/home/czh/IdeaProjects/SparkDemo/out/artifacts")
    sc.setLogLevel("ERROR")

    // one micro-batch every 3 seconds
    val ssc = new StreamingContext(sc, Seconds(3))

    val kafkaParams = Map[String, String](
      "bootstrap.servers" -> "172.17.0.2:9092",
      "group.id" -> "test-fkss",
      "auto.offset.reset" -> "smallest"
    )
    val topics = Set("test-fkss")

    val stream = createStream(ssc, kafkaParams, topics)
    // each Kafka record is a (key, value) pair; the log line is the value
    val lines = stream.map(_._2)

    lines.foreachRDD { rdd =>
      rdd.foreachPartition { partitionOfRecords =>
        partitionOfRecords.foreach(pair => {
          val (time, ip) = parseLog(pair)
          println(time + " " + ip)

          // count accesses per IP in Redis
          val jedis = RedisClient.pool.getResource
          jedis.select(0)
          if (null == jedis.get(ip))
            jedis.set(ip, "1")
          else
            jedis.incr(ip)
          val at = jedis.get(ip).toInt

          // persist the latest count for this IP to MySQL
          val c = new JDBCCommend()
          c.add(time, ip, at)
          jedis.close()
        })
      }
    }

    ssc.start()
    ssc.awaitTermination()
  }

  def createStream(scc: StreamingContext, kafkaParam: Map[String, String], topics: Set[String]) = {
    KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](scc, kafkaParam, topics)
  }

  // a log line looks like "<timestamp> <ip>"
  def parseLog(log: String): (String, String) = {
    var time = ""
    var ip = ""
    try {
      time = log.split(" ")(0)
      ip = log.split(" ")(1)
    } catch {
      case e: Exception => e.printStackTrace()
    }
    (time, ip)
  }
}

Redis connection pool: RedisClient.scala

import org.apache.commons.pool2.impl.GenericObjectPoolConfig
import redis.clients.jedis.JedisPool

object RedisClient extends Serializable {
  val redisHost = "127.0.0.1"
  val redisPort = 6599
  val redisTimeout = 30000

  lazy val pool = new JedisPool(new GenericObjectPoolConfig(), redisHost, redisPort, redisTimeout)

  lazy val hook = new Thread {
    override def run = {
      println("Execute hook thread: " + this)
      pool.destroy()
    }
  }
  sys.addShutdownHook(hook.run)
}
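For debugging it can help to dump the per-IP counters the streaming job has accumulated in Redis. Below is a minimal sketch reusing the pool above; the object name RedisCheck is mine and not part of the original project, and it assumes the file sits next to RedisClient.scala:

import scala.collection.JavaConverters._

object RedisCheck {
  def main(args: Array[String]): Unit = {
    val jedis = RedisClient.pool.getResource
    try {
      jedis.select(0)
      // KEYS is fine for a tiny demo database; avoid it on large production instances
      jedis.keys("*").asScala.foreach { ip =>
        println(ip + " -> " + jedis.get(ip))
      }
    } finally {
      jedis.close()
    }
  }
}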

JDBC connection: JDBCConnection.scala

import java.sql.{Connection, DriverManager}

object JDBCConnection {
  val IP = "172.17.0.2"
  val Port = "3306"
  val DBType = "mysql"
  val DBName = "st"
  val username = "root"
  val password = "root"
  val url = "jdbc:" + DBType + "://" + IP + ":" + Port + "/" + DBName

  // load the MySQL JDBC driver
  classOf[com.mysql.jdbc.Driver]

  def getConnection(): Connection = {
    DriverManager.getConnection(url, username, password)
  }

  def close(conn: Connection): Unit = {
    try {
      // check for null before calling isClosed to avoid a NullPointerException
      if (conn != null && !conn.isClosed()) {
        conn.close()
      }
    } catch {
      case ex: Exception => ex.printStackTrace()
    }
  }
}

JDBC operations: JDBCCommend.scala

class JDBCCommend {
  case class Ip(time: String, ip: String, access_times: Int)

  def add(time: String, ip: String, access_times: Int): Boolean = {
    val conn = JDBCConnection.getConnection()
    try {
      val sql = new StringBuilder()
        .append("INSERT INTO ip_table (time, ip, access_times) ")
        .append("VALUES(?, ?, ?)")
      val ps = conn.prepareStatement(sql.toString())
      ps.setObject(1, time)
      ps.setObject(2, ip)
      ps.setObject(3, access_times)
      ps.executeUpdate() > 0
    } finally {
      conn.close()
    }
  }
}
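The add method assumes a table named ip_table already exists in the st database; the post does not show its DDL. A minimal sketch matching the three columns used above (the column types are my assumption) is:

CREATE TABLE ip_table (time VARCHAR(32), ip VARCHAR(32), access_times INT);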

pom.xml

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com</groupId>
  <artifactId>czh</artifactId>
  <version>1.0-SNAPSHOT</version>
  <inceptionYear>2008</inceptionYear>
  <properties>
    <scala.version>2.11.0</scala.version>
  </properties>

  <repositories>
    <repository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </repository>
  </repositories>

  <pluginRepositories>
    <pluginRepository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </pluginRepository>
  </pluginRepositories>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.specs</groupId>
      <artifactId>specs</artifactId>
      <version>1.2.5</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.11</artifactId>
      <version>2.3.2</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.11</artifactId>
      <version>2.3.2</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.12</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.11</artifactId>
      <version>2.3.2</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka_2.11</artifactId>
      <version>1.6.3</version>
    </dependency>
    <dependency>
      <groupId>redis.clients</groupId>
      <artifactId>jedis</artifactId>
      <version>2.9.0</version>
    </dependency>
    <dependency>
      <groupId>com.alibaba</groupId>
      <artifactId>fastjson</artifactId>
      <version>1.2.47</version>
    </dependency>
  </dependencies>

  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
          <args>
            <arg>-target:jvm-1.5</arg>
          </args>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <configuration>
          <downloadSources>true</downloadSources>
          <buildcommands>
            <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
          </buildcommands>
          <additionalProjectnatures>
            <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
          </additionalProjectnatures>
          <classpathContainers>
            <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
            <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
          </classpathContainers>
        </configuration>
      </plugin>
    </plugins>
  </build>

  <reporting>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
        </configuration>
      </plugin>
    </plugins>
  </reporting>
</project>

5. Start the program that simulates IP accesses and generates the log

write-flumelog.py

import time
import random

# append 10 fake access records ("<timestamp> <ip>") to the file Flume is tailing
for i in range(10):
    f = open('/home/czh/docker-public-file/testflume.log', 'a')
    time.sleep(random.randint(4, 9) * 0.1)
    current_time = time.time()
    ip = str(random.randint(4, 5)) + "." + str(random.randint(4, 4)) \
        + "." + str(random.randint(4, 5)) + "." + str(random.randint(4, 4))
    print(str(current_time) + " " + ip)
    f.write(str(current_time) + " " + ip + "\n")
    f.close()
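Run the generator with python write-flumelog.py (the script works under both Python 2 and 3); each run appends ten log lines for Flume to pick up.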

The overall result of running everything is as follows:

The Python program writes IPs and access timestamps to the log file

[Screenshots: generated log entries]

The Spark Streaming program picks up the data sent by Flume in real time, computes each IP's access count, and writes it to MySQL

[Screenshots: Spark Streaming output and the resulting rows in MySQL]
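To double-check what ended up in MySQL, a query against the ip_table columns used above, such as SELECT ip, MAX(access_times) AS access_times FROM ip_table GROUP BY ip; shows the latest access count recorded for each IP.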
