1. Filtering lines containing "ERROR" from an RDD with the filter function
-----------------------------------------------------------------
// file is an existing RDD[String], e.g. created with sc.textFile(...)
val errors = file.filter(line => line.contains("ERROR"))
errors.count()
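For context, a self-contained version of the snippet above might look like this (a sketch only: the local master, object name, and log path are assumptions, not from the original notes):

```scala
import org.apache.spark.{SparkConf, SparkContext}

object ErrorFilter {
  def main(args: Array[String]): Unit = {
    // local[*] master and the input path below are illustrative assumptions
    val conf = new SparkConf().setAppName("ErrorFilter").setMaster("local[*]")
    val sc = new SparkContext(conf)
    val file = sc.textFile("hdfs://localhost:9000/logs/app.log") // assumed path
    val errors = file.filter(line => line.contains("ERROR"))
    println(errors.count()) // number of lines containing "ERROR"
    sc.stop()
  }
}
```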
2. Spark's goal: write distributed programs as if they were single-machine programs
3. Distributed data architecture: the two ways to create a Resilient Distributed Dataset (RDD)
----------------------------------------------------------------
a: Create it from a Hadoop file system
b: Transform a parent RDD into a new RDD
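The two creation paths can be sketched as follows (assuming an existing SparkContext `sc`; the path and variable names are illustrative):

```scala
// a: from a Hadoop-compatible file system
val fromHdfs = sc.textFile("hdfs://localhost:9000/datatnt/textworda.txt")
// b: from a parent RDD, by applying a transformation
val fromParent = fromHdfs.map(line => line.toLowerCase)
```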
4. DSM: traditional distributed shared memory systems (in contrast to RDDs)
5. Akka: the Scala-based communication framework used by Spark
6. Fault tolerance: Spark chooses to log updates (the alternative is data checkpointing)
   Mechanisms: Lineage, Checkpoint, Shuffle
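A brief sketch of how the two fault-tolerance approaches surface in the API (assuming an existing SparkContext `sc`; the checkpoint directory and input path are assumptions; lineage itself is recorded automatically):

```scala
// Lineage: each transformation records its parent RDD, so a lost
// partition can be recomputed from the original input
val counts = sc.textFile("hdfs://localhost:9000/datatnt/textworda.txt") // assumed path
  .flatMap(_.split(" "))
  .map(word => (word, 1))
  .reduceByKey(_ + _)

// Checkpoint: explicitly persist the RDD to stable storage,
// truncating its lineage once materialized
sc.setCheckpointDir("hdfs://localhost:9000/checkpoint") // assumed directory
counts.checkpoint()
counts.count() // an action triggers both the computation and the checkpoint
```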
-----------------------------------------------------------------------------------------------
-----------------------------------------------------------------------------------------------
WordCount: count the word frequencies in a file
package ymhd
import org.apache.log4j.{Level, Logger}
import org.apache.spark._
import SparkContext._
import scala.collection.mutable.ListBuffer
/**
* Created by sendoh on 2015/4/6.
*/
object WordCount {
  def main(args: Array[String]): Unit = {
    // Silence noisy framework logging
    Logger.getLogger("org.apache.spark").setLevel(Level.WARN)
    Logger.getLogger("org.eclipse.jetty.server").setLevel(Level.OFF)
    // Expect exactly three arguments
    if (args.length != 3) {
      println("Usage: java -jar code.jar dependency_jars file_location save_location")
      System.exit(1)
    }
    // Collect the dependency jars from the comma-separated first argument
    val jars = ListBuffer[String]()
    args(0).split(',').foreach(jars += _)
    // Configure and create the SparkContext
    val conf = new SparkConf()
      .setAppName("WordCount")
      .setSparkHome("/usr/local/spark-1.2.0-bin-hadoop2.4")
      .setJars(jars)
      .setMaster("spark://192.168.30.129:7077")
    val sc = new SparkContext(conf)
    // Read the input file (args(1), e.g. hdfs://localhost:9000/datatnt/textworda.txt),
    // split each line into words, count, and save the result to args(2)
    // (e.g. hdfs://localhost:9000/outputtnt/wordcount)
    val textRDD = sc.textFile(args(1))
    textRDD.flatMap(line => line.split(" "))
      .map(word => (word, 1))
      .reduceByKey(_ + _)
      .saveAsSequenceFile(args(2))
    sc.stop()
  }
}