Introduction to Redis
According to Baidu Baike: Redis is an open-source key-value database written in ANSI C. It supports networking, can run purely in memory or persist data, keeps an append-only log, and provides APIs for many languages. In short, Redis is a key-value store.
Installation
1. Download the stable Redis 3 release from http://download.redis.io/releases/redis-3.2.11.tar.gz
2. Upload redis-3.2.11.tar.gz to the server
3. Extract the Redis source package
tar -zxvf redis-3.2.11.tar.gz -C /usr/local/src/
4. Enter the source directory, then compile and install Redis
cd /usr/local/src/redis-3.2.11/
make && make install
5. If the build fails with an error about missing dependencies:
gcc (the C compiler) is missing
6. Configure a local YUM repository and install the RPM packages Redis depends on
yum -y install gcc
7. Compile and install again
make && make install
8. If it fails again because the jemalloc memory allocator is not installed, either install jemalloc or simply build against libc:
make MALLOC=libc && make install
9. Recompile and install
make MALLOC=libc && make install
10. On every machine, create a redis directory under /usr/local and copy the bundled configuration file redis.conf into /usr/local/redis
mkdir /usr/local/redis
cp /usr/local/src/redis-3.2.11/redis.conf /usr/local/redis
11. Edit redis.conf on every machine
daemonize yes # run Redis in the background
appendonly yes # enable the AOF log, which records an entry for every write operation
bind 192.168.1.207
12. Start all of the Redis nodes (the command can be run from any directory)
cd /usr/local/redis
redis-server /usr/local/redis/redis.conf
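A quick way to confirm the server is up is to ping it (using the IP bound in redis.conf above):
redis-cli -h 192.168.1.207 ping   # should answer PONG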
13. Check the Redis process state
ps -ef | grep redis
14. Connect to Redis with the command-line client
redis-cli -h 192.168.145.101
redis-cli -p 6379
redis-cli -h 192.168.145.101 -p 6379
15. Shut down Redis
redis-cli shutdown
16. Configure a Redis password
1> Check whether a password is currently set:
127.0.0.1:6379> config get requirepass
1) "requirepass"
2) ""
2> The output above shows that no password is set, so set one now:
127.0.0.1:6379> config set requirepass 123
OK
3> Query again and Redis now requires authentication:
127.0.0.1:6379> config get requirepass
(error) NOAUTH Authentication required. // not authorized to run the command without authenticating
127.0.0.1:6379> auth 123 // authenticate with the password
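Note that config set only changes the running instance. To keep the password across restarts you can also put it into the redis.conf used above (same example password), and the client can supply it with -a when connecting:
requirepass 123
redis-cli -h 192.168.145.101 -p 6379 -a 123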
Setting Up a Redis Cluster
Download, compile, and install
wget http://download.redis.io/releases/redis-4.0.11.tar.gz
tar zxvf redis-4.0.11.tar.gz -C /usr/local/src/
cd /usr/local/src/redis-4.0.11
make MALLOC=libc && make install
On node n1
mkdir -p /data/redis/{7000,7001}/data
mkdir -p /usr/local/redis-cluster/bin
mkdir -p /usr/local/redis-cluster/7000
cp -r /usr/local/bin/* /usr/local/redis-cluster/bin/
cp /usr/local/src/redis-4.0.11/redis.conf /usr/local/redis-cluster/7000/
#edit the following settings
vi /usr/local/redis-cluster/7000/redis.conf
bind ... #bind to this node's own IP
daemonize yes #run in the background
dir /data/redis/7000/data #data directory
port 7000 #listening port
pidfile /var/run/redis_7000.pid #pid file
logfile /var/log/redis_7000.log #log file
cluster-enabled yes #uncomment to enable cluster mode
cluster-config-file nodes.conf #uncomment
cluster-node-timeout 15000 #uncomment
appendonly yes #enable AOF persistence
#configuration for the 7001 instance
cp -rf /usr/local/redis-cluster/7000/ /usr/local/redis-cluster/7001/
sed -i 's/7000/7001/g' /usr/local/redis-cluster/7001/redis.conf
#copy the configuration to the other nodes
scp -r /usr/local/redis-cluster/{7000,7001} n2:/usr/local/redis-cluster/
scp -r /usr/local/redis-cluster/{7000,7001} n3:/usr/local/redis-cluster/
On node n2
mkdir -p /data/redis/{7000,7001}/data
mkdir -p /usr/local/redis-cluster/bin
##cp mkreleasehdr.sh redis-benchmark redis-check-aof redis-cli redis-server redis-trib.rb /usr/local/redis-cluster/bin
cp -r /usr/local/bin/* /usr/local/redis-cluster/bin/
sed -i 's/192.168.145.201/192.168.145.202/g' /usr/local/redis-cluster/7000/redis.conf
sed -i 's/192.168.145.201/192.168.145.202/g' /usr/local/redis-cluster/7001/redis.conf
On node n3
mkdir -p /data/redis/{7000,7001}/data
mkdir -p /usr/local/redis-cluster/bin
cp mkreleasehdr.sh redis-benchmark redis-check-aof redis-cli redis-server redis-trib.rb /usr/local/redis-cluster/bin
sed -i 's/192.168.145.201/192.168.145.203/g' /usr/local/redis-cluster/7000/redis.conf
sed -i 's/192.168.145.201/192.168.145.203/g' /usr/local/redis-cluster/7001/redis.conf
Start the services
Run the following on n1, n2, and n3:
/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/7000/redis.conf
/usr/local/redis-cluster/bin/redis-server /usr/local/redis-cluster/7001/redis.conf
Check that the processes are up
ps -ef|grep redis
netstat -nplt|grep redis-server
Install Ruby
If you have multiple VMs, rubygems only needs to be installed on one machine.
yum install rubygems
The Ruby redis gem is not installed yet; install it with the command gem install redis.
If this fails with "redis requires Ruby version >= 2.3.0", the system's default Ruby is too old for the gem.
Installing/upgrading Ruby on CentOS 7
Reference: https://blog.csdn.net/qq_26440803/article/details/82717244
Once the Ruby version requirement is met, run gem install redis again to install the gem.
Build the cluster:
cd /usr/local/src/redis-4.0.11/
src/redis-trib.rb create --replicas 1 n1:7000 n1:7001 n2:7000 n2:7001 n3:7000 n3:7001
#connect with the client and use the cluster commands to inspect the cluster state and node information
/usr/local/redis-cluster/bin/redis-cli -c -h n1 -p 7001
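Once connected, the cluster can be sanity-checked with the standard cluster commands (the prompt below assumes the connection opened above):
n1:7001> cluster info    # cluster_state should be ok, with all 16384 slots assigned
n1:7001> cluster nodes   # lists every master and replica and its slot ranges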
Using Redis from Java/Scala
The Jedis dependency
<dependency>
    <groupId>redis.clients</groupId>
    <artifactId>jedis</artifactId>
    <version>2.9.0</version>
</dependency>
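The XML above gives the Maven coordinates; for an sbt build (the sample code below is Scala), the equivalent line would be:
libraryDependencies += "redis.clients" % "jedis" % "2.9.0"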
package day10

import redis.clients.jedis.{Jedis, JedisPool, JedisPoolConfig}

object JedisConnectionPool {

  val config = new JedisPoolConfig()
  // maximum number of connections in the pool
  config.setMaxTotal(20)
  // maximum number of idle connections
  config.setMaxIdle(10)
  // validate a connection when it is borrowed from the pool
  config.setTestOnBorrow(true)
  // 10000 is the connection timeout in milliseconds (10 seconds), "123" is the Redis password
  val pool = new JedisPool(config, "192.168.145.101", 6379, 10000, "123")

  def getConnection(): Jedis = {
    pool.getResource
  }

  def main(args: Array[String]): Unit = {
    val conn = JedisConnectionPool.getConnection()
    conn.set("income", "1000")
    val r1 = conn.get("xiaoniu")
    println(r1)
    conn.incrBy("xiaoniu", -50)
    val r2 = conn.get("xiaoniu")
    println(r2)
    val r = conn.keys("*")
    import scala.collection.JavaConversions._
    for (p <- r) {
      println(p + " : " + conn.get(p))
    }
    // return the connection to the pool only after all operations are done
    conn.close()
  }
}
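A small hedged sketch of a safer borrow/return pattern with the pool above: wrapping the work in try/finally guarantees the connection goes back to the pool even if a command throws.
val jedis = JedisConnectionPool.getConnection()
try {
  jedis.incrBy("income", 100)
} finally {
  // a Jedis instance obtained from a pool is returned to the pool by close()
  jedis.close()
}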
Computing Multiple Metrics with a Spark Streaming Program
Environment: Kafka 0.8 + Spark Streaming + Redis
package day10

import kafka.common.TopicAndPartition
import kafka.message.MessageAndMetadata
import kafka.serializer.StringDecoder
import kafka.utils.{ZKGroupTopicDirs, ZkUtils}
import org.I0Itec.zkclient.ZkClient
import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.dstream.{DStream, InputDStream}
import org.apache.spark.streaming.kafka.{HasOffsetRanges, KafkaUtils, OffsetRange}
import org.apache.spark.streaming.{Duration, StreamingContext}

object OrderCount {

  def main(args: Array[String]): Unit = {
    // consumer group name
    val group = "g1"
    // create the SparkConf
    val conf = new SparkConf().setAppName("OrderCount").setMaster("local[4]")
    // create the StreamingContext with a 5-second batch interval
    val ssc = new StreamingContext(conf, Duration(5000))
    // load the IP rules and broadcast them
    val broadcastRef = IPUtils.broadcastIpRules(ssc, "C:\\Users\\刘元帅\\Desktop\\ip.txt")
    // topic to consume
    val topic = "orders"
    // Kafka broker list (the Spark Streaming tasks connect directly to the Kafka partitions
    // with the low-level API, which is more efficient)
    val brokerList = "node1:9092,node2:9092,node3:9092"
    // ZooKeeper quorum, used later to update the consumed offsets
    // (Redis or MySQL could also be used to store offsets)
    val zkQuorum = "node1:2181,node2:2181,node3:2181"
    // set of topic names used when creating the stream; Spark Streaming can consume several topics at once
    val topics: Set[String] = Set(topic)
    // ZKGroupTopicDirs points at the ZooKeeper directory where the offsets are stored
    val topicDirs = new ZKGroupTopicDirs(group, topic)
    // the ZooKeeper offset path for this group and topic
    val zkTopicPath = s"${topicDirs.consumerOffsetDir}"
    // Kafka parameters
    val kafkaParams = Map(
      "metadata.broker.list" -> brokerList,
      "group.id" -> group,
      // start reading from the earliest offset
      "auto.offset.reset" -> kafka.api.OffsetRequest.SmallestTimeString
      //"key.deserializer" -> classOf[StringDeserializer],
      //"value.deserializer" -> classOf[StringDeserializer],
      //"deserializer.encoding" -> "GB2312" // encoding used when reading data from Kafka
    )
    // create a ZooKeeper client from the quorum addresses;
    // it reads the saved offsets from ZooKeeper and updates them later
    val zkClient = new ZkClient(zkQuorum)
    // check whether the path has children (children exist if we previously saved per-partition offsets), e.g.
    // /g001/offsets/wordcount/0/10001
    // /g001/offsets/wordcount/1/30001
    // /g001/offsets/wordcount/2/10001
    // zkTopicPath -> /g001/offsets/wordcount/
    val children = zkClient.countChildren(zkTopicPath)
    var kafkaStream: InputDStream[(String, String)] = null
    // if ZooKeeper has saved offsets, they become the starting position of the kafkaStream
    var fromOffsets: Map[TopicAndPartition, Long] = Map()
    // offsets were saved before
    if (children > 0) {
      for (i <- 0 until children) {
        // e.g. /g001/offsets/wordcount/0/10001
        //      /g001/offsets/wordcount/0
        val partitionOffset = zkClient.readData[String](s"$zkTopicPath/${i}")
        // e.g. wordcount/0
        val tp = TopicAndPartition(topic, i)
        // add each partition's offset to fromOffsets
        // e.g. wordcount/0 -> 10001
        fromOffsets += (tp -> partitionOffset.toLong)
      }
      // Key: the Kafka message key, Value: e.g. "hello tom hello jerry"
      // this transforms every Kafka message into a (key, message) tuple
      val messageHandler = (mmd: MessageAndMetadata[String, String]) => (mmd.key(), mmd.message())
      // create a direct DStream through KafkaUtils (fromOffsets makes consumption continue from the saved offsets)
      // type parameters: [key type, value type, key decoder, value decoder, output type]
      kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder, (String, String)](ssc, kafkaParams, fromOffsets, messageHandler)
    } else {
      // no saved offsets: start from the latest (largest) or earliest (smallest) offset, as configured in kafkaParams
      kafkaStream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](ssc, kafkaParams, topics)
    }
    // offset ranges of the current batch
    var offsetRanges = Array[OffsetRange]()
    // with the direct approach, offsets can only be read from the KafkaRDD itself,
    // so they must be captured before applying transformations to the DStream
    // iterate over the RDDs in the DStream;
    // kafkaStream.foreachRDD runs on the Driver
    kafkaStream.foreachRDD { kafkaRDD =>
      // only process batches that contain data
      if (!kafkaRDD.isEmpty()) {
        // only a KafkaRDD can be cast to HasOffsetRanges to obtain the offsets
        offsetRanges = kafkaRDD.asInstanceOf[HasOffsetRanges].offsetRanges
        val lines: RDD[String] = kafkaRDD.map(_._2)
        // split each record into fields
        val field: RDD[Array[String]] = lines.map(_.split(" "))
        // total revenue
        CalculateUtil.calculateIncome(field)
        // revenue per item category
        CalculateUtil.calculateItem(field)
        // revenue per region
        CalculateUtil.calculateZone(field, broadcastRef)
        for (o <- offsetRanges) {
          // e.g. /g001/offsets/wordcount/0
          val zkPath = s"${topicDirs.consumerOffsetDir}/${o.partition}"
          // save this partition's offset to ZooKeeper
          // e.g. /g001/offsets/wordcount/0/20000
          ZkUtils.updatePersistentPath(zkClient, zkPath, o.untilOffset.toString)
        }
      }
    }
    ssc.start()
    ssc.awaitTermination()
  }
}
package day10

import day4.MyUtils
import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD

object CalculateUtil {

  // total revenue
  def calculateIncome(field: RDD[Array[String]]) = {
    // compute the batch total and write it to Redis
    val priceRDD: RDD[Double] = field.map(arr => {
      val price = arr(4).toDouble
      price
    })
    // reduce is an action and returns the result to the Driver:
    // the total amount of the current batch
    val sum: Double = priceRDD.reduce(_ + _)
    // get a Jedis connection
    val conn = JedisConnectionPool.getConnection()
    // add the current batch total onto the historical total
    // (a plain set would overwrite the history, so only incrByFloat is used)
    conn.incrByFloat(Constant.TOTAL_INCOME, sum)
    // release the connection
    conn.close()
  }
  // revenue per item category
  def calculateItem(field: RDD[Array[String]]) = {
    // on which side is field.map invoked? On the Driver (the function itself runs on the Executors)
    val itemAndPrice: RDD[(String, Double)] = field.map(arr => {
      // category
      val item = arr(2)
      // amount
      val price = arr(4).toDouble
      (item, price)
    })
    // aggregate by category
    val reduced: RDD[(String, Double)] = itemAndPrice.reduceByKey(_ + _)
    // add the current batch's values into Redis
    // foreachPartition is an action
    // if the connection were obtained here it would be created on the Driver,
    // and holding a Jedis connection on the Driver is not what we want
    // val conn = JedisConnectionPool.getConnection()
    reduced.foreachPartition(part => {
      // get a Jedis connection;
      // this connection is actually obtained inside an Executor,
      // and JedisConnectionPool exists only once per Executor process
      val conn = JedisConnectionPool.getConnection()
      part.foreach(t => {
        conn.incrByFloat(t._1, t._2)
      })
      // close the connection only after the whole partition has been written
      conn.close()
    })
  }
  // revenue per region
  def calculateZone(field: RDD[Array[String]], broadcastRef: Broadcast[Array[(Long, Long, String)]]) = {
    val provinceAndPrice: RDD[(String, Double)] = field.map(arr => {
      val ip = arr(1)
      val price = arr(4).toDouble
      val ipNum = MyUtils.ip2Long(ip)
      // read the full broadcast rule set inside the Executor
      val allRules: Array[(Long, Long, String)] = broadcastRef.value
      // binary search for the matching IP range
      val index = MyUtils.binarySearch(allRules, ipNum)
      var province = "未知" // "未知" means "unknown"
      if (index != -1) {
        province = allRules(index)._3
      }
      // (province, order amount)
      (province, price)
    })
    // aggregate by province
    val reduced: RDD[(String, Double)] = provinceAndPrice.reduceByKey(_ + _)
    // update the totals in Redis
    reduced.foreachPartition(part => {
      val conn = JedisConnectionPool.getConnection()
      part.foreach(t => {
        conn.incrByFloat(t._1, t._2)
      })
      conn.close()
    })
  }
}
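The code above references a Constant object that is not shown in this article. A minimal sketch that makes it compile; the actual Redis key name stored in TOTAL_INCOME is an assumption:
package day10

// minimal sketch: TOTAL_INCOME is simply the Redis key used for the running total
object Constant {
  val TOTAL_INCOME = "TOTAL_INCOME"
}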
package day10

import org.apache.spark.broadcast.Broadcast
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.StreamingContext

object IPUtils {
  def broadcastIpRules(ssc: StreamingContext, ipRulesPath: String) = {
    // get the SparkContext from the StreamingContext
    val sc = ssc.sparkContext
    val rulesLines: RDD[String] = sc.textFile(ipRulesPath)
    // parse the IP rule data
    val ipRulesRDD: RDD[(Long, Long, String)] = rulesLines.map(line => {
      val fields = line.split("[|]")
      val startNum = fields(2).toLong
      val endNum = fields(3).toLong
      val province = fields(6)
      (startNum, endNum, province)
    })
    // collect the IP rules, which are spread across several Executors, back to the Driver
    val rulesInDriver: Array[(Long, Long, String)] = ipRulesRDD.collect()
    // broadcast the data from the Driver to the Executors
    // by calling the broadcast method on the SparkContext;
    // the returned reference to the broadcast variable still lives on the Driver
    val broadcastRef: Broadcast[Array[(Long, Long, String)]] = sc.broadcast(rulesInDriver)
    broadcastRef
  }
}
package day4

import java.sql.{Connection, DriverManager}
import scala.io.{BufferedSource, Source}

object MyUtils {

  def data2MySQL(it: Iterator[(String, Int)]): Unit = {
    val conn: Connection = DriverManager.getConnection("jdbc:mysql://localhost:3306/bigdata?characterEncoding=UTF-8", "root", "123")
    val pstm = conn.prepareStatement("insert into access_log values (?,?)")
    // write every record of the partition
    it.foreach(tp => {
      pstm.setString(1, tp._1)
      pstm.setInt(2, tp._2)
      pstm.executeUpdate()
    })
    pstm.close()
    conn.close()
  }

  // convert an IP address to its decimal (Long) form
  def ip2Long(ip: String): Long = {
    val fragments = ip.split("[.]")
    var ipNum = 0L
    for (i <- 0 until fragments.length) {
      ipNum = fragments(i).toLong | ipNum << 8L
    }
    ipNum
  }

  // load the IP rules into memory
  def readRules(path: String): Array[(Long, Long, String)] = {
    // read the rule file
    val bf: BufferedSource = Source.fromFile(path)
    val lines: Iterator[String] = bf.getLines()
    // parse the rules and materialize them in memory
    val rules: Array[(Long, Long, String)] = lines.map(line => {
      val fields: Array[String] = line.split("[|]")
      val startNum = fields(2).toLong
      val endNum = fields(3).toLong
      val province = fields(6)
      (startNum, endNum, province)
    }).toArray
    rules
  }

  // binary search over the sorted (start, end, province) ranges
  def binarySearch(lines: Array[(Long, Long, String)], ip: Long): Int = {
    var low = 0
    var high = lines.length - 1
    while (low <= high) {
      val middle = (low + high) / 2
      if ((ip >= lines(middle)._1) && (ip <= lines(middle)._2))
        return middle
      if (ip < lines(middle)._1)
        high = middle - 1
      else
        low = middle + 1
    }
    -1
  }
}
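A small hedged usage sketch for the helpers above (the rule file path is the same one passed to IPUtils earlier; the demo object and sample IP are hypothetical):
object MyUtilsDemo {
  def main(args: Array[String]): Unit = {
    // quick check of readRules, ip2Long and binarySearch
    val rules = day4.MyUtils.readRules("C:\\Users\\刘元帅\\Desktop\\ip.txt")
    val ipNum = day4.MyUtils.ip2Long("1.2.3.4")
    val idx = day4.MyUtils.binarySearch(rules, ipNum)
    println(if (idx != -1) rules(idx)._3 else "未知")
  }
}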
spark-on-yarn
Purpose: run Spark programs on YARN. There are two modes.
cluster mode
Launch
/appdata/spark/bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode cluster --driver-memory 1g --executor-memory 1g --executor-cores 2 --queue default /appdata/spark/examples/jars/spark-examples_2.11-2.3.3.jar 10
/appdata/spark/bin/spark-submit --class cn.edu360.spark.day1.WordCount --master yarn --deploy-mode cluster --driver-memory 1g --executor-memory 1g --executor-cores 2 --queue default ./hello-spark-1.0.jar hdfs://node1:9000/wc hdfs://node1:9000/out-yarn-1
Observe the process state
On the host that ran spark-submit
On the other hosts
How it works
When submitting jobs with spark-on-yarn, cluster mode is used in most cases. In this mode the Driver runs inside the cluster, namely inside the ApplicationMaster process; if that process fails, YARN restarts the ApplicationMaster (and with it the Driver). The SparkSubmit process only submits the job.
The Spark Driver starts as an ApplicationMaster in the YARN cluster. For every job the client submits to the ResourceManager, a unique ApplicationMaster is allocated on one of the cluster's NodeManager nodes, and that ApplicationMaster manages the application for its entire lifetime. The steps are:
- The client submits a request to the ResourceManager and uploads the jar to HDFS.
This involves four steps:
a) Connect to the RM.
b) Obtain metric, queue, and resource information from the RM's ASM (ApplicationsManager).
c) Upload the application jar and the spark-assembly jar.
d) Set up the runtime environment and the container context (launch-container.sh and similar scripts).
- The ResourceManager asks a NodeManager for resources and creates the Spark ApplicationMaster (each SparkContext has its own ApplicationMaster).
- The NodeManager starts the ApplicationMaster, which registers with the ResourceManager's ASM.
- The ApplicationMaster locates the jar on HDFS and starts the SparkContext, DAGScheduler, and YARN Cluster Scheduler.
- The ApplicationMaster requests container resources from the ResourceManager's ASM.
- The ResourceManager tells the NodeManagers to allocate containers, and reports about those containers come back through the ASM (each container corresponds to one executor).
- The Spark ApplicationMaster then interacts directly with the containers (executors) to complete the distributed job.
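Because the Driver lives inside the ApplicationMaster in cluster mode, its output does not reach the submitting terminal. A hedged way to check on such a job is the standard YARN CLI (the application id is whatever YARN printed when the submission was accepted, and yarn logs assumes log aggregation is enabled):
yarn application -list                     # running applications and their state
yarn logs -applicationId <application_id>  # aggregated container logs, including the Driver's output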
client mode
/appdata/spark/bin/spark-submit --class org.apache.spark.examples.SparkPi --master yarn --deploy-mode client --driver-memory 1g --executor-memory 1g --executor-cores 2 --queue default lib/spark-examples*.jar 10
/appdata/spark/bin/spark-shell --master yarn --deploy-mode client --driver-memory 1g --executor-memory 1g --executor-cores 2 --queue default
If you use an interactive shell you must use client mode. In this mode the Driver runs inside the SparkSubmit process, because the collected results have to be returned to the command line (the machine where the command was launched). Client mode is generally used for testing, or for running the interactive shells spark-shell and spark-sql.
Check the processes
On the host that started spark-shell
On the other hosts
client mode:
In client mode the Driver runs on the client and obtains resources from the RM through the ApplicationMaster. The local Driver communicates with all of the executor containers and aggregates the final results. Killing the terminal kills the Spark application. In general, use this mode when the results only need to be returned to the terminal.
After the client-side Driver submits the application to YARN, YARN starts the ApplicationMaster and then the executors; both run inside containers. A container's default memory is 1 GB, the ApplicationMaster gets driver-memory, and each executor gets executor-memory. Because the Driver stays on the client, the program's results can be displayed there, and the Driver shows up as a process named SparkSubmit.
Note: when running spark-on-yarn in client mode you may hit errors caused by YARN's memory checks (containers killed for exceeding physical or virtual memory limits).
To avoid this, edit yarn-site.xml on every YARN node and add the following settings:
<property>
<name>yarn.nodemanager.pmem-check-enabled</name>
<value>false</value>
</property>
<property>
<name>yarn.nodemanager.vmem-check-enabled</name>
<value>false</value>
</property>
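A hedged reminder: the setting must be present on every NodeManager and YARN has to be restarted before it takes effect. A sketch, assuming a standard Hadoop layout with $HADOOP_HOME set:
scp $HADOOP_HOME/etc/hadoop/yarn-site.xml node2:$HADOOP_HOME/etc/hadoop/   # repeat for every node
$HADOOP_HOME/sbin/stop-yarn.sh
$HADOOP_HOME/sbin/start-yarn.sh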
Differences between the two modes
cluster mode: the Driver runs inside YARN, so the application's results cannot be displayed on the client. This mode is therefore best suited to applications that save their final results to external storage (HDFS, Redis, MySQL, ...) rather than printing to stdout; the client terminal only shows the YARN job's basic status.
client mode: the Driver runs on the client and the application's results are displayed there, so it suits applications whose results are printed as output (such as spark-shell).