Simulating the Spark Master/Worker process communication with Akka

0.Add the Akka dependencies

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.w4xj.scala</groupId>
    <artifactId>akka</artifactId>
    <version>1.0-SNAPSHOT</version>


    <!-- Common version properties -->
    <properties>
        <encoding>UTF-8</encoding>
        <scala.version>2.11.8</scala.version>
        <scala.compat.version>2.11</scala.compat.version>
        <akka.version>2.4.17</akka.version>
    </properties>

    <dependencies>
        <!-- Scala standard library -->
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-library</artifactId>
            <version>${scala.version}</version>
        </dependency>

        <!-- Akka actor dependency -->
        <dependency>
            <groupId>com.typesafe.akka</groupId>
            <artifactId>akka-actor_${scala.compat.version}</artifactId>
            <version>${akka.version}</version>
        </dependency>

        <!-- Actor communication across processes (remoting) -->
        <dependency>
            <groupId>com.typesafe.akka</groupId>
            <artifactId>akka-remote_${scala.compat.version}</artifactId>
            <version>${akka.version}</version>
        </dependency>
    </dependencies>

    <!-- Build plugins -->
    <build>
        <!-- Locations of main and test sources -->
        <sourceDirectory>src/main/scala</sourceDirectory>
        <testSourceDirectory>src/test/scala</testSourceDirectory>
        <plugins>
            <!-- Plugin that compiles Scala sources -->
            <plugin>
                <groupId>net.alchim31.maven</groupId>
                <artifactId>scala-maven-plugin</artifactId>
                <version>3.2.2</version>
                <executions>
                    <execution>
                        <goals>
                            <goal>compile</goal>
                            <goal>testCompile</goal>
                        </goals>
                        <configuration>
                            <args>
                                <arg>-dependencyfile</arg>
                                <arg>${project.build.directory}/.scala_dependencies</arg>
                            </args>
                        </configuration>
                    </execution>
                </executions>
            </plugin>

            <!-- Shade plugin that builds a runnable fat jar -->
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-shade-plugin</artifactId>
                <version>2.4.3</version>
                <executions>
                    <execution>
                        <phase>package</phase>
                        <goals>
                            <goal>shade</goal>
                        </goals>
                        <configuration>
                            <filters>
                                <filter>
                                    <artifact>*:*</artifact>
                                    <excludes>
                                        <exclude>META-INF/*.SF</exclude>
                                        <exclude>META-INF/*.DSA</exclude>
                                        <exclude>META-INF/*.RSA</exclude>
                                    </excludes>
                                </filter>
                            </filters>
                            <transformers>
                                <transformer
                                        implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
                                    <resource>reference.conf</resource>
                                </transformer>
                                <!-- Main class for the jar manifest (placeholder) -->
                                <transformer
                                        implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
                                    <mainClass>xxx</mainClass>
                                </transformer>
                            </transformers>
                        </configuration>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>


</project>

1.Spark Master/Worker communication (registration)

[1].Requirements

①.The worker registers with the Master; the Master records the registration and replies that registration succeeded

②.The worker sends heartbeats on a timer, which the Master receives

③.On receiving a heartbeat, the Master updates that worker's last-heartbeat timestamp

④.The Master runs a scheduled task that detects which registered workers have stopped updating their heartbeats and removes them from the HashMap

⑤.Deploy master and workers as separate processes (on Linux) -> how to package the Maven project -> upload to Linux

[2].Code

①.Message protocol (serializable message / entity classes)

package com.w4xj.scala.sparkregister

/**
  * Worker registration message
  */
case class RegisterWorkerInfo(id: String, cpu: Int, ram: Int)

/**
  * WorkerInfo, stored in the master's HashMap
  * @param id
  * @param cpu
  * @param ram
  */
class WorkerInfo(val id: String, val cpu: Int, val ram: Int)

/**
  * Reply: registration succeeded
  */
case object RegisteredWorkerInfo
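These messages cross process boundaries, so they must be serializable. Scala case classes and case objects are `java.io.Serializable` out of the box, which is exactly what akka-remote's default Java serialization relies on here (and later warns about in the logs). A minimal stand-alone check, using local copies of two of the message types:

```scala
// Stand-alone sketch (no Akka required): case classes and case objects
// implement java.io.Serializable automatically, so they can be sent
// over akka-remote without any extra serializer configuration.
case class RegisterWorkerInfo(id: String, cpu: Int, ram: Int)
case object RegisteredWorkerInfo

println(RegisterWorkerInfo("w1", 1, 1024).isInstanceOf[Serializable]) // true
println(RegisteredWorkerInfo.isInstanceOf[Serializable])              // true
```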

②.Master

package com.w4xj.scala.sparkregister

import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

class SparkMaster extends Actor { // SparkMaster.scala
  // workerId -> WorkerInfo, the master's registry of known workers
  val workers = collection.mutable.HashMap[String, WorkerInfo]()

  override def receive: Receive = {
    case "start" => println("Master service started, listening on its port...")

    case RegisterWorkerInfo(workerId, cpu, ram) => {
      if (!workers.contains(workerId)) {
        println(workerId + " registered OK.... ")
        val workerInfo = new WorkerInfo(workerId, cpu, ram)
        workers += ((workerId, workerInfo))
        sender() ! RegisteredWorkerInfo
      }
    }
  }
}

object SparkMaster {
  def main(args: Array[String]): Unit = {
    val config = ConfigFactory.parseString(
      s"""
         |akka.actor.provider="akka.remote.RemoteActorRefProvider"
         |akka.remote.netty.tcp.hostname=127.0.0.1
         |akka.remote.netty.tcp.port=10001
            """.stripMargin)
    val actorSystem = ActorSystem("sparkMaster", config)
    val masterActorRef = actorSystem.actorOf(Props[SparkMaster], "master-01")
    masterActorRef ! "start"
  }
}
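The worker will find the master through a remote actor path built from the master's ActorSystem name ("sparkMaster"), host, port, and actor name ("master-01"). A tiny hypothetical helper (pure Scala; the `masterPath` name is ours, not part of the original code) shows how the pieces combine:

```scala
// Hypothetical helper (not in the original code) assembling the akka.tcp
// actor path that a worker uses to look up the master actor.
def masterPath(systemName: String, host: String, port: Int, actorName: String): String =
  s"akka.tcp://$systemName@$host:$port/user/$actorName"

println(masterPath("sparkMaster", "127.0.0.1", 10001, "master-01"))
// akka.tcp://sparkMaster@127.0.0.1:10001/user/master-01
```

The `/user/` segment is fixed: all actors created via `actorSystem.actorOf` live under the `user` guardian.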

③.Worker

package com.w4xj.scala.sparkregister

import java.util.UUID

import akka.actor.{Actor, ActorSelection, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

class SparkWorker(masterUrl: String) extends Actor {
  var masterProxy: ActorSelection = _
  val workerId = UUID.randomUUID().toString

  override def preStart(): Unit = {
    // Look up the master by its remote actor path
    masterProxy = context.actorSelection(masterUrl)
  }

  override def receive: Receive = {
    case "start" => { // we are ready
      println(workerId + " sending registration to master... ")
      masterProxy ! RegisterWorkerInfo(workerId, 1, 64 * 1024)
    }
    case RegisteredWorkerInfo => {
      println(workerId + " registered with master successfully... ")
    }
  }
}

object SparkWorker {
  def main(args: Array[String]): Unit = {
    val host = "127.0.0.1"
    val port = 10002
    val masterURL = "akka.tcp://sparkMaster@127.0.0.1:10001/user/master-01"
    val workerName = "worker-01"
    val config = ConfigFactory.parseString(
      s"""
         |akka.actor.provider="akka.remote.RemoteActorRefProvider"
         |akka.remote.netty.tcp.hostname=${host}
         |akka.remote.netty.tcp.port=${port}
            """.stripMargin)
    val actorSystem = ActorSystem("sparkWorker", config)
    val workerActorRef = actorSystem.actorOf(Props(new SparkWorker(masterURL)), workerName)
    workerActorRef ! "start"
  }
}

[3].Run

①.Start the Master first

②.Then start the Worker, which registers with the Master

3cf12045-bdc6-4ee7-be21-fe8f50fd7c89 sending registration to master...

[WARN] [SECURITY][05/01/2019 18:48:35.656] [sparkWorker-akka.remote.default-remote-dispatcher-6] [akka.serialization.Serialization(akka://sparkWorker)] Using the default Java serializer for class [com.w4xj.scala.sparkregister.RegisterWorkerInfo] which is not recommended because of performance implications. Use another serializer or disable this warning using the setting 'akka.actor.warn-about-java-serializer-usage'

3cf12045-bdc6-4ee7-be21-fe8f50fd7c89 registered with master successfully...
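The [WARN] [SECURITY] line comes from akka-remote falling back to default Java serialization for these messages. It is harmless for this demo, and the warning itself names the setting that disables it. A config sketch (could be appended to each `ConfigFactory.parseString` block; not required for the example to run):

```
akka.actor.warn-about-java-serializer-usage = off
```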

2.Spark Master/Worker communication (worker sends heartbeats on a timer)

[1].Requirements

①.After registering successfully, the worker starts a timer and periodically sends heartbeats to the master

②.The master receives each heartbeat and updates that worker's last-heartbeat time

[2].Implementation

①.New messages (case class / case object)

/**
  * Command telling the worker to send a heartbeat
  */
case object SendHeartBeat

/**
  * Heartbeat message
  * @param id
  */
case class HearBeat(id: String)

②.Extend WorkerInfo

class WorkerInfo(val id: String, val cpu: Int, val ram: Int) {
  // Last time a heartbeat was received from this worker
  var lastHeartBeatTime: Long = _
}

③.After registration completes, the worker starts the timer

case RegisteredWorkerInfo => {
  println(workerId + " registered with master successfully... ")
  println(workerId + " starting to send periodic heartbeats to master... ")
  // Start a timer that sends the SendHeartBeat command to ourselves.
  // The scheduler needs an ExecutionContext and the duration DSL in scope:
  import context.dispatcher
  import scala.concurrent.duration._
  // 0 millis: fire immediately, with no initial delay
  // 3000 millis: then fire every 3 seconds
  // self: send the message to ourselves
  // SendHeartBeat: the heartbeat command
  context.system.scheduler.schedule(0 millis, 3000 millis, self, SendHeartBeat)
}

④.Send the heartbeat to the master

// Send a heartbeat to the master
case SendHeartBeat => {
  println(s"------- $workerId sending heartbeat -------")
  masterProxy ! HearBeat(workerId)
}

⑤.The master receives the heartbeat and updates the last-heartbeat time

// A worker sent a heartbeat
case HearBeat(workerId) => {
  // Guard the lookup: the worker may already have been removed as timed out,
  // in which case workers(workerId) would throw NoSuchElementException
  workers.get(workerId).foreach { workerInfo =>
    workerInfo.lastHeartBeatTime = System.currentTimeMillis()
    println(s"master: ${workerId} heartbeat time updated...")
  }
}

[3].Output

①.Worker

7cceb8c1-9543-47f3-b171-8a73f2620d79 starting to send periodic heartbeats to master...

------- 7cceb8c1-9543-47f3-b171-8a73f2620d79 sending heartbeat -------

[WARN] [SECURITY][05/02/2019 08:05:45.367] [sparkWorker-akka.remote.default-remote-dispatcher-9] [akka.serialization.Serialization(akka://sparkWorker)] Using the default Java serializer for class [com.w4xj.scala.sparkregister.HearBeat] which is not recommended because of performance implications. Use another serializer or disable this warning using the setting 'akka.actor.warn-about-java-serializer-usage'

------- 7cceb8c1-9543-47f3-b171-8a73f2620d79 sending heartbeat -------

------- 7cceb8c1-9543-47f3-b171-8a73f2620d79 sending heartbeat -------

------- 7cceb8c1-9543-47f3-b171-8a73f2620d79 sending heartbeat -------

------- 7cceb8c1-9543-47f3-b171-8a73f2620d79 sending heartbeat -------

------- 7cceb8c1-9543-47f3-b171-8a73f2620d79 sending heartbeat -------

------- 7cceb8c1-9543-47f3-b171-8a73f2620d79 sending heartbeat -------

------- 7cceb8c1-9543-47f3-b171-8a73f2620d79 sending heartbeat -------

②.Master

master: 7cceb8c1-9543-47f3-b171-8a73f2620d79 heartbeat time updated...

master: 7cceb8c1-9543-47f3-b171-8a73f2620d79 heartbeat time updated...

master: 7cceb8c1-9543-47f3-b171-8a73f2620d79 heartbeat time updated...

master: 7cceb8c1-9543-47f3-b171-8a73f2620d79 heartbeat time updated...

master: 7cceb8c1-9543-47f3-b171-8a73f2620d79 heartbeat time updated...

master: 7cceb8c1-9543-47f3-b171-8a73f2620d79 heartbeat time updated...

master: 7cceb8c1-9543-47f3-b171-8a73f2620d79 heartbeat time updated...

master: 7cceb8c1-9543-47f3-b171-8a73f2620d79 heartbeat time updated...

3.Spark Master/Worker communication (master checks worker liveness on a timer)

[1].Requirement: the Master starts a scheduled task that checks which registered workers have stopped updating their heartbeats, and removes timed-out workers from the HashMap

[2].Implementation

①.Two new messages

/**
  * Message the master sends itself to start the timeout checker
  */
case object StartTimeOutWorker

/**
  * Message the master sends itself to detect and remove timed-out workers
  */
case object RemoveTimeOutWorker

②.Start the scheduled check when the master starts (directly, or indirectly via a message)

case "start" => {
  println("Master service started, listening on its port...")
  // Trigger the scheduler via a message. We could start it directly here;
  // Spark itself sends a separate message to kick off the timer.
  self ! StartTimeOutWorker
}

// Start a timer that periodically checks whether any worker's heartbeat has timed out
case StartTimeOutWorker => {
  // The scheduler needs an ExecutionContext and the duration DSL in scope
  import context.dispatcher
  import scala.concurrent.duration._
  context.system.scheduler.schedule(0 millis, 9000 millis, self, RemoveTimeOutWorker)
}

③.Detect and remove timed-out workers

case RemoveTimeOutWorker => {
  val workerInfos = workers.values
  val currentTime = System.currentTimeMillis()
  // Filter out workers whose last heartbeat is older than 6 seconds, then remove them
  workerInfos
    .filter(workerInfo => currentTime - workerInfo.lastHeartBeatTime > 6000)
    .foreach(workerInfo => workers.remove(workerInfo.id))
  println(s"----- ${workers.size} workers still alive -----")
}
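The removal logic itself is plain collection code and can be exercised without any actor machinery. A minimal sketch (the 6000 ms threshold matches the handler above; `now` is passed in explicitly so the behavior is deterministic):

```scala
import scala.collection.mutable

// Mirror of the master's bookkeeping, without the actor machinery.
class WorkerInfo(val id: String, val cpu: Int, val ram: Int) {
  var lastHeartBeatTime: Long = 0L
}

// Remove every worker whose last heartbeat is older than timeoutMs.
def removeTimedOut(workers: mutable.HashMap[String, WorkerInfo],
                   now: Long, timeoutMs: Long = 6000): Unit =
  workers.values
    .filter(w => now - w.lastHeartBeatTime > timeoutMs)
    .foreach(w => workers.remove(w.id))

val workers = mutable.HashMap[String, WorkerInfo]()
val fresh = new WorkerInfo("fresh", 1, 1024)
fresh.lastHeartBeatTime = 10000L // 2000 ms old at now = 12000: kept
val stale = new WorkerInfo("stale", 1, 1024)
stale.lastHeartBeatTime = 1000L  // 11000 ms old at now = 12000: removed
workers("fresh") = fresh
workers("stale") = stale

removeTimedOut(workers, now = 12000L)
println(workers.keys.mkString(", ")) // fresh
```

Note that the check interval (9000 ms) is larger than the timeout (6000 ms), so a worker sending heartbeats every 3 seconds will always have a recent-enough timestamp when the check fires.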

[3].Output

Master service started, listening on its port...

----- 0 workers still alive -----

----- 0 workers still alive -----

0b557f37-d2e5-4018-9884-16851f7dfe25 registered OK....

[WARN] [SECURITY][05/02/2019 08:38:08.846] [sparkMaster-akka.remote.default-remote-dispatcher-8] [akka.serialization.Serialization(akka://sparkMaster)] Using the default Java serializer for class [com.w4xj.scala.sparkregister.RegisteredWorkerInfo$] which is not recommended because of performance implications. Use another serializer or disable this warning using the setting 'akka.actor.warn-about-java-serializer-usage'

master: 0b557f37-d2e5-4018-9884-16851f7dfe25 heartbeat time updated...

master: 0b557f37-d2e5-4018-9884-16851f7dfe25 heartbeat time updated...

master: 0b557f37-d2e5-4018-9884-16851f7dfe25 heartbeat time updated...

----- 1 workers still alive -----

master: 0b557f37-d2e5-4018-9884-16851f7dfe25 heartbeat time updated...

master: 0b557f37-d2e5-4018-9884-16851f7dfe25 heartbeat time updated...

master: 0b557f37-d2e5-4018-9884-16851f7dfe25 heartbeat time updated...

----- 1 workers still alive -----

master: 0b557f37-d2e5-4018-9884-16851f7dfe25 heartbeat time updated...

master: 0b557f37-d2e5-4018-9884-16851f7dfe25 heartbeat time updated...

master: 0b557f37-d2e5-4018-9884-16851f7dfe25 heartbeat time updated...

[WARN] [05/02/2019 08:38:33.503] [sparkMaster-akka.remote.default-remote-dispatcher-9] [akka.tcp://sparkMaster@127.0.0.1:10001/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FsparkWorker%40127.0.0.1%3A10002-0] Association with remote system [akka.tcp://sparkWorker@127.0.0.1:10002] has failed, address is now gated for [5000] ms. Reason: [Disassociated]

----- 1 workers still alive -----

----- 0 workers still alive -----

4.Spark Master/Worker communication (extracting runtime parameters)

[1].The modified Master

package com.w4xj.scala.sparkregister

import akka.actor.{Actor, ActorSystem, Props}
import com.typesafe.config.ConfigFactory
import scala.concurrent.duration._

class SparkMaster extends Actor { // SparkMaster.scala
  // workerId -> WorkerInfo, the master's registry of known workers
  val workers = collection.mutable.HashMap[String, WorkerInfo]()

  override def receive: Receive = {
    case "start" => {
      println("Master service started, listening on its port...")
      // Trigger the scheduler via a message. We could start it directly here;
      // Spark itself sends a separate message to kick off the timer.
      self ! StartTimeOutWorker
    }

    // A worker sent its registration
    case RegisterWorkerInfo(workerId, cpu, ram) => {
      if (!workers.contains(workerId)) {
        println(workerId + " registered OK.... ")
        val workerInfo = new WorkerInfo(workerId, cpu, ram)
        workers += ((workerId, workerInfo))
        sender() ! RegisteredWorkerInfo
      }
    }

    // A worker sent a heartbeat
    case HearBeat(workerId) => {
      // Guard the lookup: the worker may already have been removed as timed out
      workers.get(workerId).foreach { workerInfo =>
        workerInfo.lastHeartBeatTime = System.currentTimeMillis()
        println(s"master: ${workerId} heartbeat time updated...")
      }
    }

    // Start a timer that periodically checks whether any worker's heartbeat has timed out
    case StartTimeOutWorker => {
      // The scheduler needs an ExecutionContext in scope
      import context.dispatcher
      context.system.scheduler.schedule(0 millis, 9000 millis, self, RemoveTimeOutWorker)
    }

    case RemoveTimeOutWorker => {
      val workerInfos = workers.values
      val currentTime = System.currentTimeMillis()
      // Filter out workers whose last heartbeat is older than 6 seconds, then remove them
      workerInfos
        .filter(workerInfo => currentTime - workerInfo.lastHeartBeatTime > 6000)
        .foreach(workerInfo => workers.remove(workerInfo.id))
      println(s"----- ${workers.size} workers still alive -----")
    }
  }
}

object SparkMaster {
  def main(args: Array[String]): Unit = {
    if (args.length != 3) {
      println("Usage: masterHost masterPort masterName")
      sys.exit() // exit the program
    }

    val masterHost = args(0)
    val masterPort = args(1)
    val masterName = args(2)
    val config = ConfigFactory.parseString(
      s"""
         |akka.actor.provider="akka.remote.RemoteActorRefProvider"
         |akka.remote.netty.tcp.hostname=${masterHost}
         |akka.remote.netty.tcp.port=${masterPort}
            """.stripMargin)
    val actorSystem = ActorSystem("sparkMaster", config)
    val masterActorRef = actorSystem.actorOf(Props[SparkMaster], masterName)
    masterActorRef ! "start"
  }
}

[2].The modified Worker

package com.w4xj.scala.sparkregister

import java.util.UUID
import scala.concurrent.duration._
import akka.actor.{Actor, ActorSelection, ActorSystem, Props}
import com.typesafe.config.ConfigFactory

class SparkWorker(masterHost: String, masterPort: String, masterName: String) extends Actor {
  var masterProxy: ActorSelection = _
  val workerId = UUID.randomUUID().toString

  override def preStart(): Unit = {
    // Look up the master by its remote actor path
    masterProxy = context.actorSelection(s"akka.tcp://sparkMaster@${masterHost}:${masterPort}/user/${masterName}")
  }

  override def receive: Receive = {
    case "start" => { // we are ready
      println(workerId + " sending registration to master... ")
      masterProxy ! RegisterWorkerInfo(workerId, 1, 64 * 1024)
    }

    case RegisteredWorkerInfo => {
      println(workerId + " registered with master successfully... ")
      println(workerId + " starting to send periodic heartbeats to master... ")
      // Start a timer that sends the SendHeartBeat command to ourselves.
      // The scheduler needs an ExecutionContext in scope:
      import context.dispatcher
      // 0 millis: fire immediately, with no initial delay
      // 3000 millis: then fire every 3 seconds
      // self: send the message to ourselves
      // SendHeartBeat: the heartbeat command
      context.system.scheduler.schedule(0 millis, 3000 millis, self, SendHeartBeat)
    }

    // Send a heartbeat to the master
    case SendHeartBeat => {
      println(s"------- $workerId sending heartbeat -------")
      masterProxy ! HearBeat(workerId)
    }
  }
}

object SparkWorker {
  def main(args: Array[String]): Unit = {
    if (args.length != 6) {
      println("Usage: workerHost workerPort workerName masterHost masterPort masterName")
      sys.exit() // exit the program
    }

    val workerHost = args(0)
    val workerPort = args(1)
    val workerName = args(2)
    val masterHost = args(3)
    val masterPort = args(4)
    val masterName = args(5)
    val config = ConfigFactory.parseString(
      s"""
         |akka.actor.provider="akka.remote.RemoteActorRefProvider"
         |akka.remote.netty.tcp.hostname=${workerHost}
         |akka.remote.netty.tcp.port=${workerPort}
            """.stripMargin)

    val actorSystem = ActorSystem("sparkWorker", config)
    val workerActorRef = actorSystem.actorOf(Props(new SparkWorker(masterHost, masterPort, masterName)), workerName)
    workerActorRef ! "start"
  }
}

[3].Test

①.Start the Master first, with arguments:

127.0.0.1
9999
masterName

②.Start worker1, with arguments:

127.0.0.1
9001
workerName1
127.0.0.1
9999
masterName

③.Start worker2, with arguments:

127.0.0.1
9002
workerName2
127.0.0.1
9999
masterName

④.Start worker3, with arguments:

127.0.0.1
9003
workerName3
127.0.0.1
9999
masterName

⑤.Shut the workers down one by one

⑥.Output

Master service started, listening on its port...

----- 0 workers still alive -----

----- 0 workers still alive -----

----- 0 workers still alive -----

----- 0 workers still alive -----

a8bd90e6-f756-45a2-9c3b-b43523025e22 registered OK....

[WARN] [SECURITY][05/02/2019 09:16:51.527] [sparkMaster-akka.remote.default-remote-dispatcher-9] [akka.serialization.Serialization(akka://sparkMaster)] Using the default Java serializer for class [com.w4xj.scala.sparkregister.RegisteredWorkerInfo$] which is not recommended because of performance implications. Use another serializer or disable this warning using the setting 'akka.actor.warn-about-java-serializer-usage'

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

----- 1 workers still alive -----

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

35ab4ef6-06ac-4fc5-b39d-c62a90c97004 registered OK....

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

----- 2 workers still alive -----

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

----- 2 workers still alive -----

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

----- 2 workers still alive -----

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

b2baf4ca-0717-4d29-824a-2ab1027465b8 registered OK....

master: b2baf4ca-0717-4d29-824a-2ab1027465b8 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: b2baf4ca-0717-4d29-824a-2ab1027465b8 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

----- 3 workers still alive -----

master: b2baf4ca-0717-4d29-824a-2ab1027465b8 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: b2baf4ca-0717-4d29-824a-2ab1027465b8 heartbeat time updated...

[WARN] [05/02/2019 09:17:47.189] [sparkMaster-akka.remote.default-remote-dispatcher-10] [akka.tcp://sparkMaster@127.0.0.1:9999/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FsparkWorker%40127.0.0.1%3A9003-2] Association with remote system [akka.tcp://sparkWorker@127.0.0.1:9003] has failed, address is now gated for [5000] ms. Reason: [Disassociated]

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

----- 3 workers still alive -----

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

----- 2 workers still alive -----

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: 35ab4ef6-06ac-4fc5-b39d-c62a90c97004 heartbeat time updated...

[WARN] [05/02/2019 09:18:04.505] [sparkMaster-akka.remote.default-remote-dispatcher-10] [akka.tcp://sparkMaster@127.0.0.1:9999/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FsparkWorker%40127.0.0.1%3A9002-1] Association with remote system [akka.tcp://sparkWorker@127.0.0.1:9002] has failed, address is now gated for [5000] ms. Reason: [Disassociated]

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

----- 1 workers still alive -----

master: a8bd90e6-f756-45a2-9c3b-b43523025e22 heartbeat time updated...

[WARN] [05/02/2019 09:18:13.081] [sparkMaster-akka.remote.default-remote-dispatcher-10] [akka.tcp://sparkMaster@127.0.0.1:9999/system/endpointManager/reliableEndpointWriter-akka.tcp%3A%2F%2FsparkWorker%40127.0.0.1%3A9001-0] Association with remote system [akka.tcp://sparkWorker@127.0.0.1:9001] has failed, address is now gated for [5000] ms. Reason: [Disassociated]

----- 0 workers still alive -----

 

 
