HBase 0.99 Source Code Analysis - Master Startup Process (1)

HBase versions are updated quickly, and there are already many source-code analyses of earlier HBase versions online, but the master startup process has changed considerably in recent versions. This article studies the HBase Master server startup process based on the current dev version 0.99 (1.0 has not been released yet; the latest release at the time of writing is 0.98).

From the HBase startup scripts (start-hbase.sh, hbase-daemon.sh, hbase) we can see that the entry point for HBase Master startup is the main function of org.apache.hadoop.hbase.master.HMaster, which is passed the command-line argument "start". The function is defined as follows:

  public static void main(String [] args) {
    VersionInfo.logVersion();
    new HMasterCommandLine(HMaster.class).doMain(args);
  }

HMasterCommandLine extends ServerCommandLine; it parses the HBase Master server's command-line arguments and starts the Master thread. The core statement of its doMain method is:

  int ret = ToolRunner.run(HBaseConfiguration.create(), this, args);

This delegates configuration handling to the Hadoop utility class ToolRunner, whose run method then calls back into HMasterCommandLine's run method.

Through the HMasterCommandLine, ServerCommandLine and ToolRunner classes, the code that processes HBase and Hadoop configuration and command-line arguments is reused, so each server type does not need to handle its own configuration and argument parsing separately.
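
To make the callback pattern concrete, here is a minimal, self-contained sketch (DemoServerCommandLine is a hypothetical class, not part of HBase): ToolRunner.run first merges generic Hadoop options (for example -D key=value) into the Configuration, then calls back into the Tool's run method with the remaining arguments.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.conf.Configured;
  import org.apache.hadoop.util.Tool;
  import org.apache.hadoop.util.ToolRunner;

  public class DemoServerCommandLine extends Configured implements Tool {
    @Override
    public int run(String[] args) throws Exception {
      // By the time ToolRunner calls back into run(), generic Hadoop options
      // (e.g. -D key=value) have already been merged into getConf().
      System.out.println("remaining args: " + java.util.Arrays.toString(args));
      return 0;
    }

    public static void main(String[] args) throws Exception {
      // Same shape as the core statement of HMasterCommandLine.doMain.
      int ret = ToolRunner.run(new Configuration(), new DemoServerCommandLine(), args);
      System.exit(ret);
    }
  }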

Next we step into HMasterCommandLine's run method.

It first checks for the following command-line options and converts them into the corresponding configuration properties:

Option              Configuration property
minRegionServers    hbase.regions.server.count.min
minServers          hbase.regions.server.count.min
localRegionServers  hbase.regionservers
masters             hbase.masters
backup              hbase.master.backup

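As a rough, self-contained sketch of this option-to-configuration translation (simplified and illustrative; OptionToConfSketch is not the actual HBase parsing code):

  import org.apache.hadoop.conf.Configuration;

  public class OptionToConfSketch {
    public static void main(String[] args) {
      Configuration conf = new Configuration();
      for (String arg : args) {
        if (arg.startsWith("--minRegionServers=")) {
          // e.g. --minRegionServers=2 becomes hbase.regions.server.count.min=2
          conf.setInt("hbase.regions.server.count.min",
              Integer.parseInt(arg.split("=", 2)[1]));
        } else if (arg.equals("--backup")) {
          conf.setBoolean("hbase.master.backup", true);
        }
      }
      System.out.println("min region servers = "
          + conf.getInt("hbase.regions.server.count.min", -1));
    }
  }
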
Then the HMasterCommandLine.startMaster() method is called:

  private int startMaster() {
    Configuration conf = getConf();
    try {
      // If 'local', defer to LocalHBaseCluster instance.  Starts master
      // and regionserver both in the one JVM.
      if (LocalHBaseCluster.isLocal(conf)) {
        DefaultMetricsSystem.setMiniClusterMode(true);
        final MiniZooKeeperCluster zooKeeperCluster = new MiniZooKeeperCluster(conf);
        File zkDataPath = new File(conf.get(HConstants.ZOOKEEPER_DATA_DIR));
        int zkClientPort = conf.getInt(HConstants.ZOOKEEPER_CLIENT_PORT, 0);
        if (zkClientPort == 0) {
          throw new IOException("No config value for "
              + HConstants.ZOOKEEPER_CLIENT_PORT);
        }
        zooKeeperCluster.setDefaultClientPort(zkClientPort);

        // login the zookeeper server principal (if using security)
        ZKUtil.loginServer(conf, "hbase.zookeeper.server.keytab.file",
          "hbase.zookeeper.server.kerberos.principal", null);

        int clientPort = zooKeeperCluster.startup(zkDataPath);
        if (clientPort != zkClientPort) {
          String errorMsg = "Could not start ZK at requested port of " +
            zkClientPort + ".  ZK was started at port: " + clientPort +
            ".  Aborting as clients (e.g. shell) will not be able to find " +
            "this ZK quorum.";
          System.err.println(errorMsg);
          throw new IOException(errorMsg);
        }
        conf.set(HConstants.ZOOKEEPER_CLIENT_PORT,
                 Integer.toString(clientPort));
        conf.setInt(HConstants.ZK_SESSION_TIMEOUT, 10 *1000);
        // Need to have the zk cluster shutdown when master is shutdown.
        // Run a subclass that does the zk cluster shutdown on its way out.
        LocalHBaseCluster cluster = new LocalHBaseCluster(conf, conf.getInt("hbase.masters", 1),
          conf.getInt("hbase.regionservers", 1), LocalHMaster.class, HRegionServer.class);
        ((LocalHMaster)cluster.getMaster(0)).setZKCluster(zooKeeperCluster);
        cluster.startup();
        waitOnMasterThreads(cluster);
      } else {
        logProcessInfo(getConf());
        CoordinatedStateManager csm =
          CoordinatedStateManagerFactory.getCoordinatedStateManager(conf);
        HMaster master = HMaster.constructMaster(masterClass, conf, csm);
        if (master.isStopped()) {
          LOG.info("Won't bring the Master up as a shutdown is requested");
          return 1;
        }
        master.start();
        master.join();
        if(master.isAborted())
          throw new RuntimeException("HMaster Aborted");
      }
    } catch (Throwable t) {
      LOG.error("Master exiting", t);
      return 1;
    }
    return 0;
  }

The startMaster method first reads the configuration parameter hbase.cluster.distributed (this is the check behind LocalHBaseCluster.isLocal):

1) If the value is false, HBase runs in standalone (non-distributed) mode, in which the master threads and the region server threads start inside the same JVM. ZooKeeper is started first via MiniZooKeeperCluster, and then the LocalHBaseCluster class starts the Master and Region server threads. The number of Master threads is given by the hbase.masters parameter, and the number of Region server threads by hbase.regionservers (see the sketch after this list).

2) If hbase.cluster.distributed is true, HBase runs in distributed mode, and only the Master is started here (the Master is also a Region server in this version).
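
Below is a hedged sketch of the standalone branch, condensed from the startMaster code above. It assumes a ZooKeeper instance is already reachable on the configured client port (the real code guarantees this by starting MiniZooKeeperCluster first); the thread counts are illustrative.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.LocalHBaseCluster;

  public class StandaloneSketch {
    public static void main(String[] args) throws Exception {
      Configuration conf = HBaseConfiguration.create();
      // Thread counts for the in-JVM cluster; illustrative values.
      conf.setInt("hbase.masters", 2);
      conf.setInt("hbase.regionservers", 3);
      LocalHBaseCluster cluster = new LocalHBaseCluster(conf,
          conf.getInt("hbase.masters", 1),
          conf.getInt("hbase.regionservers", 1));
      cluster.startup();  // starts 2 master threads and 3 region server threads
      cluster.join();     // wait for all cluster threads to finish
    }
  }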

In the distributed branch, the implementation class of CoordinatedStateManager is first read from the configuration and instantiated. The implementation of getCoordinatedStateManager:

  public static CoordinatedStateManager getCoordinatedStateManager(Configuration conf) {
    Class<? extends CoordinatedStateManager> coordinatedStateMgrKlass =
      conf.getClass(HConstants.HBASE_COORDINATED_STATE_MANAGER_CLASS,
        ZkCoordinatedStateManager.class, CoordinatedStateManager.class);
    return ReflectionUtils.newInstance(coordinatedStateMgrKlass, conf);
  }

The default implementation of CoordinatedStateManager is the ZooKeeper-based ZkCoordinatedStateManager.

The CoordinatedStateManager class was introduced to abstract HBase's calls to ZooKeeper: HBase currently relies on ZooKeeper for its distributed coordination service, but in the future it may allow users to replace ZooKeeper with another coordination service.
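
For instance, a user could point the factory at their own implementation. This is only a hedged sketch: MyCoordinatedStateManager is hypothetical, and extending the ZooKeeper default is just for illustration.

  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.CoordinatedStateManager;
  import org.apache.hadoop.hbase.HBaseConfiguration;
  import org.apache.hadoop.hbase.HConstants;
  import org.apache.hadoop.hbase.coordination.ZkCoordinatedStateManager;

  public class CustomCsmSketch {
    // Hypothetical replacement implementation.
    public static class MyCoordinatedStateManager extends ZkCoordinatedStateManager {
    }

    public static void main(String[] args) {
      Configuration conf = HBaseConfiguration.create();
      // getCoordinatedStateManager above will now instantiate our class
      // via ReflectionUtils instead of the ZooKeeper default.
      conf.setClass(HConstants.HBASE_COORDINATED_STATE_MANAGER_CLASS,
          MyCoordinatedStateManager.class, CoordinatedStateManager.class);
    }
  }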

Next, HMaster's static factory method constructMaster is called to create the HMaster instance. constructMaster uses reflection to invoke the class's own constructor; the point of this indirection is to let users subclass HMaster and extend its behavior.
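
A hedged sketch of this reflective factory pattern (simplified, not the verbatim HBase code):

  import java.lang.reflect.Constructor;
  import org.apache.hadoop.conf.Configuration;
  import org.apache.hadoop.hbase.CoordinatedStateManager;
  import org.apache.hadoop.hbase.master.HMaster;

  public final class MasterFactorySketch {
    public static HMaster construct(Class<? extends HMaster> masterClass,
        Configuration conf, CoordinatedStateManager csm) {
      try {
        // Look up the (Configuration, CoordinatedStateManager) constructor on
        // the given class, which may be a user-supplied HMaster subclass.
        Constructor<? extends HMaster> c = masterClass.getConstructor(
            Configuration.class, CoordinatedStateManager.class);
        return c.newInstance(conf, csm);
      } catch (Exception e) {
        throw new RuntimeException("Failed construction of Master: " + masterClass, e);
      }
    }
  }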

The HMaster class is now a subclass of HRegionServer. This is a major change in recent versions: a Master server is also a Region server, though the Master only serves the smaller system tables and hbase:meta. See HBASE-10569 for details.

Another change: starting with version 0.96, HBase dropped the ROOT table and instead stores the location of hbase:meta directly in ZooKeeper. See HBASE-3171 for details.

The HMaster constructor:

public HMaster(final Configuration conf, CoordinatedStateManager csm)
      throws IOException, KeeperException {
    super(conf, csm);
    this.rsFatals = new MemoryBoundedLogMessageBuffer(
      conf.getLong("hbase.master.buffer.for.rs.fatals", 1*1024*1024));

    LOG.info("hbase.rootdir=" + FSUtils.getRootDir(this.conf) +
        ", hbase.cluster.distributed=" + this.conf.getBoolean(HConstants.CLUSTER_DISTRIBUTED, false));

    Replication.decorateMasterConfiguration(this.conf);

    // Hack! Maps DFSClient => Master for logs.  HDFS made this
    // config param for task trackers, but we can piggyback off of it.
    if (this.conf.get("mapreduce.task.attempt.id") == null) {
      this.conf.set("mapreduce.task.attempt.id", "hb_m_" + this.serverName.toString());
    }

    //should we check the compression codec type at master side, default true, HBASE-6370
    this.masterCheckCompression = conf.getBoolean("hbase.master.check.compression", true);

    this.metricsMaster = new MetricsMaster( new MetricsMasterWrapperImpl(this));

    // Do we publish the status?
    boolean shouldPublish = conf.getBoolean(HConstants.STATUS_PUBLISHED,
        HConstants.STATUS_PUBLISHED_DEFAULT);
    Class<? extends ClusterStatusPublisher.Publisher> publisherClass =
        conf.getClass(ClusterStatusPublisher.STATUS_PUBLISHER_CLASS,
            ClusterStatusPublisher.DEFAULT_STATUS_PUBLISHER_CLASS,
            ClusterStatusPublisher.Publisher.class);

    if (shouldPublish) {
      if (publisherClass == null) {
        LOG.warn(HConstants.STATUS_PUBLISHED + " is true, but " +
            ClusterStatusPublisher.DEFAULT_STATUS_PUBLISHER_CLASS +
            " is not set - not publishing status");
      } else {
        clusterStatusPublisherChore = new ClusterStatusPublisher(this, conf, publisherClass);
        Threads.setDaemonThreadRunning(clusterStatusPublisherChore.getThread());
      }
    }
    startActiveMasterManager();
    putUpJettyServer();
  }

Since HMaster extends HRegionServer, the HRegionServer object is constructed first.

Replication.decorateMasterConfiguration registers the ReplicationLogCleaner class in the master's log-cleaner plugin chain.
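
A hedged sketch of what this decoration amounts to, condensed from the HBase source: append ReplicationLogCleaner to the master's log-cleaner plugin chain so that WALs still pending replication are not deleted.

  import org.apache.hadoop.conf.Configuration;

  public class DecorateConfSketch {
    static void addReplicationLogCleaner(Configuration conf) {
      String key = "hbase.master.logcleaner.plugins";
      String cleaner =
          "org.apache.hadoop.hbase.replication.master.ReplicationLogCleaner";
      String plugins = conf.get(key, "");
      if (!plugins.contains(cleaner)) {
        // Keep any existing cleaners and append ours at the end of the chain.
        conf.set(key, plugins.isEmpty() ? cleaner : plugins + "," + cleaner);
      }
    }
  }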

The MetricsMasterWrapperImpl class exposes the Master's runtime statistics.

The next call worth attention is startActiveMasterManager().

Finally, the embedded Jetty server is started (putUpJettyServer) to provide the web GUI.

From the Master entry point HMaster.main to startActiveMasterManager() we have passed through quite a few layers of calls. To recap, the call hierarchy is:

HMaster.main -> HMasterCommandLine.doMain -> ToolRunner.run -> HMasterCommandLine.run -> HMasterCommandLine.startMaster -> HMaster.constructMaster -> HMaster constructor -> startActiveMasterManager

Now let's step into startActiveMasterManager():

private void startActiveMasterManager() throws KeeperException {
    String backupZNode = ZKUtil.joinZNode(
      zooKeeper.backupMasterAddressesZNode, serverName.toString());
    /*
    * Add a ZNode for ourselves in the backup master directory since we
    * may not become the active master. If so, we want the actual active
    * master to know we are backup masters, so that it won't assign
    * regions to us if so configured.
    *
    * If we become the active master later, ActiveMasterManager will delete
    * this node explicitly.  If we crash before then, ZooKeeper will delete
    * this node for us since it is ephemeral.
    */
    LOG.info("Adding ZNode for " + backupZNode + " in backup master directory");
    MasterAddressTracker.setMasterAddress(zooKeeper, backupZNode, serverName);

    activeMasterManager = new ActiveMasterManager(zooKeeper, serverName, this);
    // Start a thread to try to become the active master, so we won't block here
    Threads.setDaemonThreadRunning(new Thread(new Runnable() {
      public void run() {
        int timeout = conf.getInt(HConstants.ZK_SESSION_TIMEOUT,
          HConstants.DEFAULT_ZK_SESSION_TIMEOUT);
        // If we're a backup master, stall until a primary to writes his address
        if (conf.getBoolean(HConstants.MASTER_TYPE_BACKUP,
            HConstants.DEFAULT_MASTER_TYPE_BACKUP)) {
          LOG.debug("HMaster started in backup mode. "
            + "Stalling until master znode is written.");
          // This will only be a minute or so while the cluster starts up,
          // so don't worry about setting watches on the parent znode
          while (!activeMasterManager.hasActiveMaster()) {
            LOG.debug("Waiting for master address ZNode to be written "
              + "(Also watching cluster state node)");
            Threads.sleep(timeout);
          }
        }
        MonitoredTask status = TaskMonitor.get().createStatus("Master startup");
        status.setDescription("Master startup");
        try {
          if (activeMasterManager.blockUntilBecomingActiveMaster(timeout, status)) {
            finishActiveMasterInitialization(status);
          }
        } catch (Throwable t) {
          status.setStatus("Failed to become active: " + t.getMessage());
          LOG.fatal("Failed to become active master", t);
          // HBASE-5680: Likely hadoop23 vs hadoop 20.x/1.x incompatibility
          if (t instanceof NoClassDefFoundError &&
              t.getMessage().contains("org/apache/hadoop/hdfs/protocol/FSConstants$SafeModeAction")) {
            // improved error message for this special case
            abort("HBase is having a problem with its Hadoop jars.  You may need to "
              + "recompile HBase against Hadoop version "
              +  org.apache.hadoop.util.VersionInfo.getVersion()
              + " or change your hadoop jars to start properly", t);
          } else {
            abort("Unhandled exception. Starting shutdown.", t);
          }
        } finally {
          status.cleanup();
        }
      }
    }, getServerName().toShortString() + ".activeMasterManager"));
  }
startActiveMasterManager first registers this server in ZooKeeper as a backup Master, then creates an ActiveMasterManager object and starts a daemon thread. The daemon thread first checks the configuration item hbase.master.backup. If it is true, this server is a backup master: it waits until an active master appears, then enters blockUntilBecomingActiveMaster and attempts to take over as active master should the current one go down. If it cannot become the active master, the backup master's daemon thread keeps waiting there.

If hbase.master.backup is false (the default), the daemon thread does not wait; it enters blockUntilBecomingActiveMaster directly and competes with the other masters to become the active Master. If it wins, it executes finishActiveMasterInitialization(status); otherwise the daemon thread ends. Meanwhile, the main thread does not block after starting the daemon thread: it continues, returns to the HMaster constructor, and starts the Jetty server. After that, the main thread returns to HMasterCommandLine.startMaster() and goes on to execute master.start().

For a backup master (whether explicitly configured as a backup, or not configured as such but having lost the race in blockUntilBecomingActiveMaster), the startup process is now complete, and the server contains the following threads:
1. The RPC service and other threads started by the HRegionServer constructor, usually several of them; since HMaster extends HRegionServer, it also acts as a Region server at runtime.
2. The daemon thread blocked in blockUntilBecomingActiveMaster, ready to take over as active master if the current active master fails.
3. master.start() has started HRegionServer.run(), which enters the HRegionServer main loop.
4. The main thread, in master.join(), waiting for the HRegionServer to finish.

If this Master has successfully become the active Master, the Master server now has the following threads running concurrently:
1. The RPC service and other threads started by the HRegionServer constructor, usually several of them; since HMaster extends HRegionServer, it also acts as a Region server at runtime.
2. The daemon thread executing finishActiveMasterInitialization(), which initializes the Master's main functionality.
3. master.start() has started HRegionServer.run(), which enters the HRegionServer main loop.
4. The main thread, in master.join(), waiting for the HRegionServer to finish.

The next post will continue the analysis starting from the finishActiveMasterInitialization() method.

