The transform method applies an arbitrary RDD-to-RDD function to every batch of a DStream and returns a new DStream. This gives you access to RDD operations that the DStream API does not expose directly, such as joining a stream against a static RDD.
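For intuition, a minimal sketch (the function and stream names are illustrative, not from this project): transform lets you call an RDD-only operation such as sortBy on each batch.

import org.apache.spark.streaming.dstream.DStream

// Sketch only: apply an RDD-only operation (sortBy) to each batch of an assumed DStream[String].
def sortEachBatch(lines: DStream[String]): DStream[String] =
  lines.transform(rdd => rdd.sortBy(identity))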
Requirement: blacklist filtering.
Access log ==> DStream
20180718,sid
20180718,lee
20180718,leo
==> (sid, "20180718,sid") (lee, "20180718,lee") (leo, "20180718,leo")
        left outer join
Blacklist ==> RDD
lee
leo
==> (lee, true) (leo, true)
Result (the sketch below verifies this shape with plain RDDs):
sid: ("20180718,sid", None)        ==> kept, mapped back to the original log line
lee: ("20180718,lee", Some(true))  ==> X (filtered out)
leo: ("20180718,leo", Some(true))  ==> X (filtered out)
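The join shape above can be checked with plain RDDs before wiring it into a stream; a runnable sketch (the local master, object name, and data are assumptions for illustration):

import org.apache.spark.{SparkConf, SparkContext}

object LeftJoinSketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setMaster("local[2]").setAppName("LeftJoinSketch"))
    val logs = sc.parallelize(Seq("20180718,sid", "20180718,lee", "20180718,leo"))
      .map(line => (line.split(",")(1), line))                // (name, full log line)
    val blacks = sc.parallelize(Seq("lee", "leo")).map(name => (name, true))
    // leftOuterJoin keeps every log record: (name, (logLine, Option[Boolean]))
    val joined = logs.leftOuterJoin(blacks)
    // Keep records whose flag is absent or false, then recover the original line.
    joined.filter { case (_, (_, flag)) => !flag.getOrElse(false) }
      .map { case (_, (line, _)) => line }
      .collect()
      .foreach(println)                                       // prints only 20180718,sid
    sc.stop()
  }
}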
Project structure
pom.xml
<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>com.sid.spark</groupId>
  <artifactId>spark-train</artifactId>
  <version>1.0</version>
  <inceptionYear>2008</inceptionYear>
  <properties>
    <scala.version>2.11.8</scala.version>
    <kafka.version>0.9.0.0</kafka.version>
    <spark.version>2.2.0</spark.version>
    <hadoop.version>2.9.0</hadoop.version>
    <hbase.version>1.4.4</hbase.version>
  </properties>

  <repositories>
    <repository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </repository>
  </repositories>

  <pluginRepositories>
    <pluginRepository>
      <id>scala-tools.org</id>
      <name>Scala-Tools Maven2 Repository</name>
      <url>http://scala-tools.org/repo-releases</url>
    </pluginRepository>
  </pluginRepositories>

  <dependencies>
    <dependency>
      <groupId>org.scala-lang</groupId>
      <artifactId>scala-library</artifactId>
      <version>${scala.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.kafka</groupId>
      <artifactId>kafka_2.11</artifactId>
      <version>${kafka.version}</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>${hadoop.version}</version>
      <exclusions>
        <exclusion>
          <artifactId>servlet-api</artifactId>
          <groupId>javax.servlet</groupId>
        </exclusion>
      </exclusions>
    </dependency>
    <!--<dependency>-->
      <!--<groupId>org.apache.hbase</groupId>-->
      <!--<artifactId>hbase-client</artifactId>-->
      <!--<version>${hbase.version}</version>-->
    <!--</dependency>-->
    <!--<dependency>-->
      <!--<groupId>org.apache.hbase</groupId>-->
      <!--<artifactId>hbase-server</artifactId>-->
      <!--<version>${hbase.version}</version>-->
    <!--</dependency>-->
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.11</artifactId>
      <version>${spark.version}</version>
    </dependency>
    <dependency>
      <groupId>net.jpountz.lz4</groupId>
      <artifactId>lz4</artifactId>
      <version>1.3.0</version>
    </dependency>
    <dependency>
      <groupId>mysql</groupId>
      <artifactId>mysql-connector-java</artifactId>
      <version>5.1.31</version>
    </dependency>
  </dependencies>

  <build>
    <sourceDirectory>src/main/scala</sourceDirectory>
    <testSourceDirectory>src/test/scala</testSourceDirectory>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <executions>
          <execution>
            <goals>
              <goal>compile</goal>
              <goal>testCompile</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
          <args>
            <arg>-target:jvm-1.5</arg>
          </args>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-eclipse-plugin</artifactId>
        <configuration>
          <downloadSources>true</downloadSources>
          <buildcommands>
            <buildcommand>ch.epfl.lamp.sdt.core.scalabuilder</buildcommand>
          </buildcommands>
          <additionalProjectnatures>
            <projectnature>ch.epfl.lamp.sdt.core.scalanature</projectnature>
          </additionalProjectnatures>
          <classpathContainers>
            <classpathContainer>org.eclipse.jdt.launching.JRE_CONTAINER</classpathContainer>
            <classpathContainer>ch.epfl.lamp.sdt.launching.SCALA_CONTAINER</classpathContainer>
          </classpathContainers>
        </configuration>
      </plugin>
    </plugins>
  </build>

  <reporting>
    <plugins>
      <plugin>
        <groupId>org.scala-tools</groupId>
        <artifactId>maven-scala-plugin</artifactId>
        <configuration>
          <scalaVersion>${scala.version}</scalaVersion>
        </configuration>
      </plugin>
    </plugins>
  </reporting>
</project>
Code
package com.sid.spark

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

/**
  * Created by jy02268879 on 2018/7/18.
  *
  * Blacklist filtering
  */
object TransformApp {

  def main(args: Array[String]): Unit = {

    /**
      * When running a Spark Streaming application in Local mode,
      * do not use local or local[1] as the master URL,
      * because that gives the application only one thread.
      * With a Receiver-based input DStream, the Receiver already
      * occupies that thread, so the main processing would never run.
      * (Reading files on HDFS needs no Receiver, so local[1] or
      * local is fine there.)
      * Use local[n] with n > the number of Receivers.
      */
    val sparkConf = new SparkConf().setMaster("local[2]").setAppName("TransformApp")

    /**
      * Build the StreamingContext.
      * It takes two arguments: the SparkConf and the batch interval.
      */
    val ssc = new StreamingContext(sparkConf, Seconds(5))

    /**
      * Build the blacklist.
      */
    val blacks = List("lee", "leo")
    // Turn the local collection into an RDD of (name, true) pairs.
    val blacksRDD = ssc.sparkContext.parallelize(blacks).map(x => (x, true))

    val lines = ssc.socketTextStream("node1", 6789)

    /**
      * Access log ==> DStream
      * 20180718,sid
      * 20180718,lee
      * 20180718,leo
      * ==> (sid, "20180718,sid") (lee, "20180718,lee") (leo, "20180718,leo")
      *     left outer join
      * Blacklist ==> RDD
      * lee
      * leo
      * ==> (lee, true) (leo, true)
      * Result
      * sid: ("20180718,sid", None)        ==> kept, mapped to tuple._2._1
      * lee: ("20180718,lee", Some(true))  ==> X
      * leo: ("20180718,leo", Some(true))  ==> X
      */
    val clicklog = lines.map(x => (x.split(",")(1), x)).transform(rdd => {
      rdd.leftOuterJoin(blacksRDD)                // (name, (logLine, Option[Boolean]))
        .filter(x => !x._2._2.getOrElse(false))   // drop names flagged in the blacklist
        .map(x => x._2._1)                        // recover the original log line
    })

    clicklog.print()

    ssc.start()
    ssc.awaitTermination()
  }
}
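A possible refactor (hypothetical, not part of the original code): since transform accepts any plain RDD => RDD function, the join-filter-project logic can be extracted into a standalone function and unit-tested without a StreamingContext.

import org.apache.spark.rdd.RDD

// Hypothetical helper: the same logic as the lambda above, as a reusable RDD function.
def filterBlacklist(blacks: RDD[(String, Boolean)])(logs: RDD[(String, String)]): RDD[String] =
  logs.leftOuterJoin(blacks)
    .filter { case (_, (_, flag)) => !flag.getOrElse(false) }  // drop blacklisted names
    .map { case (_, (line, _)) => line }                       // recover the original line

// Usage: lines.map(x => (x.split(",")(1), x)).transform(filterBlacklist(blacksRDD))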
Start netcat: nc -lk 6789
Run the project from IDEA.
Type the following into the nc session:
20180718,sid
20180718,lee
20180718,leo
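Only the record that is not on the blacklist survives the filter, so the console output of clicklog.print() should look roughly like this (the timestamp will differ):

-------------------------------------------
Time: 1531900800000 ms
-------------------------------------------
20180718,sid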