spark: imitating Spark Streaming examples -- 15

Today I studied some fairly simple Spark Streaming principles and examples, and finally understood why Spark can be said to process data in real time...

1. HdfsWordcount: watches the /datatnt/text/ directory on the machine in real time.

package sparkstreaming

/**
 * Created by sendoh on 2015/3/23.
 */
import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._
//print, in real time, the word counts of files appearing in the monitored directory
object HdfsWordcount {
  def main(args: Array[String]): Unit ={
    val sparkConf = new SparkConf().setAppName("HdfsWordCount").setMaster("local[2]")
    //local[2]: run locally rather than on a cluster; Spark Streaming needs at least two threads, one to receive data and the others to process it
    val ssc = new StreamingContext(sparkConf, Seconds(20))//batch interval of 20 seconds
    val lines = ssc.textFileStream("/datatnt/text/")//directory to monitor
    val words = lines.flatMap(_.split(" ")) //split each line into words
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _) //count the words
    wordCounts.print()//print the results
    ssc.start()
    ssc.awaitTermination()
  }

}
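
A detail worth knowing here: textFileStream only picks up files that appear in the directory after the job has started, so the usual way to test is to write the file somewhere else and then move it in. Below is a minimal sketch of my own (the FeedDirectory object and the file names are made up, not from the original example):

import java.nio.file.{Files, Paths, StandardCopyOption}

object FeedDirectory {
  def main(args: Array[String]): Unit = {
    //write the file outside the monitored directory first, then move it in
    //atomically so the stream never sees a half-written file
    val tmp = Paths.get("/tmp/words.txt")
    Files.write(tmp, "hello spark hello streaming\n".getBytes("UTF-8"))
    Files.move(tmp, Paths.get("/datatnt/text/words.txt"), StandardCopyOption.ATOMIC_MOVE)
  }
}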
2. Sales simulator: reads a file and sends randomly chosen lines from it to connected clients.
package sparkstreaming

/**
 * Created by sendoh on 2015/3/23.
 */
import java.io.{PrintWriter}
import java.net.ServerSocket
import scala.io.Source

object SaleSimulation {
  def index(length: Int) = {
    import java.util.Random
    val rdm = new Random
    rdm.nextInt(length)
  }
  def main(args: Array[String]): Unit ={
    if (args.length != 3){
      System.err.println("Usage: <filename> <port> <millisecond>")//file to send : port the simulator serves on : interval between lines in milliseconds
      System.exit(1)
    }
    val filename = args(0)
    val lines = Source.fromFile(filename).getLines.toList
    val filerow = lines.length
    val listener = new ServerSocket(args(1).toInt)//listen for client connections
    while (true){
      val socket = listener.accept()
      new Thread(){
        override def run = {
          println("Got client connected from: " + socket.getInetAddress)
          val out = new PrintWriter(socket.getOutputStream(), true)//writer used to send data to the client
          while(true){
            Thread.sleep(args(2).toLong)//wait for the configured interval
            val content = lines(index(filerow))
            println(content)
            out.write(content + '\n')
            out.flush()
          }
          socket.close()
        }
      }.start()
    }
  }
}
Package the simulator into a jar and run it from the Spark directory:

java -cp sparkstreamingtext.jar sparkstreaming.SaleSimulation /datatnt/text.txt 9999 1000 //sends the lines of text.txt under /datatnt; 9999 is the port, 1000 means one line per second
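
The simulator can be sanity-checked without Spark by connecting a throwaway client and printing a few lines. A minimal sketch of my own (SimulatorClient is a made-up name; it assumes the simulator is already running on localhost:9999):

import java.io.{BufferedReader, InputStreamReader}
import java.net.Socket

object SimulatorClient {
  def main(args: Array[String]): Unit = {
    val socket = new Socket("localhost", 9999)
    val in = new BufferedReader(new InputStreamReader(socket.getInputStream))
    //the simulator pushes one random line per interval; read five of them
    for (_ <- 1 to 5) println(in.readLine())
    socket.close()
  }
}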

3. Network data demo:

package sparkstreaming

/**
 * Created by sendoh on 2015/3/23.
 */
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.streaming.{Milliseconds, Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._
import org.apache.spark.storage.StorageLevel

object NetworkWordCount {
  def main(args: Array[String]): Unit ={
    val conf = new SparkConf().setAppName("NetworkWordCount").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(5))//batch interval of 5 seconds
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)//read streaming text from a network socket (host and port, e.g. port 9999)
    val words = lines.flatMap(_.split(","))
    val wordCounts = words.map(x => (x, 1)).reduceByKey(_ + _)

    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

In the IDEA run configuration, set the program arguments to <hostname> <port> (for example, the host and port the simulator from step 2 is serving on), then run.

The examples above are all stateless Spark Streaming operations: each batch is processed on its own, and nothing is carried over from one batch to the next.

4. Stateful operations

package sparkstreaming

/**
 * Created by sendoh on 2015/3/23.
 */
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._

object StatefulWordCount {
  def main(args: Array[String]): Unit ={
    val updateFunc = (values: Seq[Int], state: Option[Int]) => {  //values arriving in this batch plus the accumulated state for the key
      val currentCount = values.foldLeft(0)(_ + _)//sum of the newly arrived values
      val previousCount = state.getOrElse(0)//previous count, defaulting to 0
      Some(currentCount + previousCount)//return the updated count
    }
    val conf = new SparkConf().setAppName("StatefulWordCount").setMaster("local[2]")
    val sc = new SparkContext(conf)
    //create the StreamingContext
    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint(".")//updateStateByKey needs a checkpoint directory to persist the state
    //get the data
    val lines = ssc.socketTextStream(args(0), args(1).toInt)
    val words = lines.flatMap(_.split(","))//split on commas (String.split takes a regex)
    val wordCounts = words.map(x => (x, 1))
    //use updateStateByKey to keep a running count per word
    val stateDstream = wordCounts.updateStateByKey[Int](updateFunc)//applies updateFunc to every key in every batch
    stateDstream.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
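
To see what updateStateByKey is actually doing, updateFunc can be exercised by hand outside Spark. A small sketch of my own (UpdateFuncDemo is made up), simulating two batches for a single key:

object UpdateFuncDemo {
  //same shape as the updateFunc above: new values for a key plus its prior state
  val updateFunc = (values: Seq[Int], state: Option[Int]) =>
    Some(values.foldLeft(0)(_ + _) + state.getOrElse(0))

  def main(args: Array[String]): Unit = {
    val afterBatch1 = updateFunc(Seq(1, 1), None)     //the key appeared twice, no prior state: Some(2)
    val afterBatch2 = updateFunc(Seq(1), afterBatch1) //one more occurrence: Some(3)
    println(s"after batch 1: $afterBatch1, after batch 2: $afterBatch2")
  }
}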

In the IDEA run configuration, set the program arguments to <hostname> <port>, then run.

5. Window operations

package sparkstreaming

/**
 * Created by sendoh on 2015/3/23.
 */
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming._
import org.apache.spark.streaming.StreamingContext._

object WindowWordCount {
  def main(args: Array[String]): Unit ={
    val conf = new SparkConf().setAppName("WindowWordCount").setMaster("local[2]")
    val sc = new SparkContext(conf)
    //create the StreamingContext
    val ssc = new StreamingContext(sc, Seconds(5))
    ssc.checkpoint(".")
    //get the data
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.flatMap(_.split(","))
    //window operation
    val wordCounts = words.map(x => (x, 1)).reduceByKeyAndWindow((a:Int, b:Int) => (a + b), Seconds(args(2).toInt), Seconds(args(3).toInt))
    //(a, b) => a + b is the reduce function; args(2) is the window length and args(3) the slide interval, both of which must be multiples of the StreamingContext batch interval
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}
In the IDEA run configuration, set the program arguments to <hostname> <port> <window length in seconds> <slide interval in seconds>, then run.
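
For long windows, Spark Streaming also provides an overload of reduceByKeyAndWindow that takes an inverse function: each slide then only adds the batch entering the window and subtracts the batch leaving it, instead of re-reducing the whole window. A sketch of my own under the same setup as WindowWordCount above (not from the original post):

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._

object WindowWordCountIncremental {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WindowWordCountIncremental").setMaster("local[2]")
    val ssc = new StreamingContext(new SparkContext(conf), Seconds(5))
    ssc.checkpoint(".")//the inverse-function variant requires checkpointing
    val words = ssc.socketTextStream(args(0), args(1).toInt).flatMap(_.split(","))
    val wordCounts = words.map(x => (x, 1)).reduceByKeyAndWindow(
      (a: Int, b: Int) => a + b, //add counts entering the window
      (a: Int, b: Int) => a - b, //subtract counts leaving the window
      Seconds(args(2).toInt),    //window length
      Seconds(args(3).toInt))    //slide interval
    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}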
6. Sales data demo
//qryStockDetail.txt contains order detail records
//order number, line number, item, quantity, unit price, amount
Usage:
java -cp sparkstreamingtext.jar sparkstreaming.SaleSimulation /datatnt/qryStockDetail.txt 9999 1000

package sparkstreaming

import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.streaming.StreamingContext._
/**
 * Created by sendoh on 2015/3/23.
 */
object SaleAmount {
  def main(args: Array[String]): Unit ={
    val conf = new SparkConf().setAppName("SaleAmount").setMaster("local[2]")
    val sc = new SparkContext(conf)
    val ssc = new StreamingContext(sc, Seconds(5))
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val words = lines.map(_.split(",")).filter(_.length == 6)//keep only records with exactly 6 fields
    val wordCounts = words.map(x => (1, x(5).toDouble)).reduceByKey(_ + _)//key everything to the constant 1 so reduceByKey sums the amount column (index 5) into a single total

    wordCounts.print()
    ssc.start()
    ssc.awaitTermination()
  }
}

In the IDEA run configuration, set the program arguments to <hostname> <port>, then run.
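
Since the constant key 1 collapses everything into one grand total, a natural variation is to key by the item column instead, giving a running amount per item. A sketch of my own (SaleAmountByItem is a made-up name), otherwise identical to SaleAmount:

import org.apache.spark.{SparkContext, SparkConf}
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.StreamingContext._

object SaleAmountByItem {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("SaleAmountByItem").setMaster("local[2]")
    val ssc = new StreamingContext(new SparkContext(conf), Seconds(5))
    val lines = ssc.socketTextStream(args(0), args(1).toInt, StorageLevel.MEMORY_AND_DISK_SER)
    val records = lines.map(_.split(",")).filter(_.length == 6)
    //x(2) is the item column and x(5) the amount column of the order detail file
    val amountPerItem = records.map(x => (x(2), x(5).toDouble)).reduceByKey(_ + _)
    amountPerItem.print()
    ssc.start()
    ssc.awaitTermination()
  }
}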


