Administering an HDFS High Availability Cluster

Using the haadmin command

Now that your HA NameNodes are configured and started, you will have access to some additional commands to administer your HA HDFS cluster. Specifically, you should familiarize yourself with the subcommands of the hdfs haadmin command.

This page describes high-level uses of some important subcommands. For specific usage information of each subcommand, you should run hdfs haadmin -help <command>.

failover - initiate a failover between two NameNodes

This subcommand causes a failover from the first provided NameNode to the second. If the first NameNode is in the Standby state, this command simply transitions the second to the Active state without error. If the first NameNode is in the Active state, an attempt will be made to gracefully transition it to the Standby state. If this fails, the fencing methods (as configured by dfs.ha.fencing.methods) will be attempted in order until one of the methods succeeds. Only after this process will the second NameNode be transitioned to the Active state. If no fencing method succeeds, the second NameNode will not be transitioned to the Active state, and an error will be returned.
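For example, assuming NameNode IDs of nn1 and nn2 (the IDs used in the configuration examples later on this page), the following initiates a failover from nn1 to nn2:

  $ hdfs haadmin -failover nn1 nn2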

getServiceState - determine whether the given NameNode is Active or Standby

Connect to the provided NameNode to determine its current state, printing either "standby" or "active" to STDOUT as appropriate. This subcommand might be used by cron jobs or monitoring scripts which need to behave differently based on whether the NameNode is currently Active or Standby.
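For example, assuming a NameNode ID of nn1, a script might check the state as follows:

  $ hdfs haadmin -getServiceState nn1
  active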

checkHealth - check the health of the given NameNode

Connect to the provided NameNode to check its health. The NameNode is capable of performing some diagnostics on itself, including checking if internal services are running as expected. This command will return 0 if the NameNode is healthy, non-zero otherwise. One might use this command for monitoring purposes.

  Note:

The checkHealth command is not yet implemented, and at present will always return success, unless the given NameNode is completely down.
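For example, assuming a NameNode ID of nn1, a monitoring script could test the exit code:

  $ hdfs haadmin -checkHealth nn1
  $ echo $?
  0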

Using the dfsadmin command when HA is enabled

When you use the dfsadmin command with HA enabled, you should use the -fs option to specify a particular NameNode using the RPC address, or service RPC address, of the NameNode. Not all operations are permitted on a standby NameNode. If no NameNode is specified, only the operations that set quotas (-setQuota, -clrQuota, -setSpaceQuota, -clrSpaceQuota), report basic file system information (-report), and check upgrade progress (-upgradeProgress) will fail over and perform the requested operation on the active NameNode. The "refresh" options (-refreshNodes, -refreshServiceAcl, -refreshUserToGroupsMappings, and -refreshSuperUserGroupsConfiguration) must be run on both the active and standby NameNodes.
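For example, to run a refresh operation against a specific NameNode, you might pass its RPC address with the -fs option (machine1.example.com:8020 is the example address used later on this page):

  $ hdfs dfsadmin -fs hdfs://machine1.example.com:8020 -refreshNodes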

Disabling HDFS High Availability

If you need to unconfigure HA and revert to using a single NameNode, either permanently or for upgrade or testing purposes, proceed as follows.
  Important:

If you have been using NFS shared storage in CDH 4, you must unconfigure it before upgrading to CDH 5. Only Quorum-based storage is supported in CDH 5. If you are already using Quorum-based storage, you do not need to unconfigure it in order to upgrade.

Step 1: Shut Down the Cluster

  1. Shut down Hadoop services across your entire cluster. Do this from Cloudera Manager; or, if you are not using Cloudera Manager, run the following command on every host in your cluster:
    $ for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
  2. Check each host to make sure that there are no processes running as the hdfs, yarn, mapred, or httpfs users. As root, run:
    # ps -aef | grep java
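    Alternatively, a quick per-user check (a sketch, assuming the standard service account names) lists any remaining processes owned by those users:
    # ps -U hdfs,yarn,mapred,httpfs -f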

Step 2: Unconfigure HA

  1. Disable the software configuration.
    • If you are using Quorum-based storage and want to unconfigure it, unconfigure the HA properties described under Configuring Software for HDFS HA.

      If you intend to redeploy HDFS HA later, comment out the HA properties rather than deleting them.

    • If you were using NFS shared storage in CDH 4, you must unconfigure the properties described below before upgrading to CDH 5.
  2. Move the NameNode metadata directories on the standby NameNode. The location of these directories is configured by dfs.namenode.name.dir and/or dfs.namenode.edits.dir. Move them to a backup location.
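    For example, on the standby NameNode (a sketch; the actual paths come from your configuration, and /data/1/dfs/nn is only an assumed value), you can look up the configured location with hdfs getconf and then move the directory to a backup location:
    $ hdfs getconf -confKey dfs.namenode.name.dir
    file:///data/1/dfs/nn
    $ sudo mv /data/1/dfs/nn /data/1/dfs/nn.backup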

Step 3: Restart the Cluster

for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x start ; done

Properties to unconfigure to disable an HDFS HA configuration using NFS shared storage

  Important:

HDFS HA with NFS shared storage is not supported in CDH 5. Comment out or delete these properties before attempting to upgrade your cluster to CDH 5. (If you intend to configure HA with Quorum-based storage under CDH 5, you should comment them out rather than deleting them, as they are also used in that configuration.)

Unconfigure the following properties.
  • In your core-site.xml file:

    fs.defaultFS (formerly fs.default.name)

    Optionally, you may have configured the default path for Hadoop clients to use the HA-enabled logical URI. For example, if you used mycluster as the NameService ID as shown below, this will be the value of the authority portion of all of your HDFS paths.

    <property>
      <name>fs.defaultFS</name>
      <value>hdfs://mycluster</value>
    </property>
  • In your hdfs-site.xml configuration file:

    dfs.nameservices

    <property>
      <name>dfs.nameservices</name>
      <value>mycluster</value>
    </property>
      Note:

    If you are also using HDFS Federation, this configuration setting will include the list of other nameservices, HA or otherwise, as a comma-separated list.

    dfs.ha.namenodes.[nameservice ID]

    A list of comma-separated NameNode IDs used by DataNodes to determine all the NameNodes in the cluster. For example, if you used mycluster as the NameService ID, and you used nn1 and nn2 as the individual IDs of the NameNodes, you would have configured this as follows:

    <property>
      <name>dfs.ha.namenodes.mycluster</name>
      <value>nn1,nn2</value>
    </property>

    dfs.namenode.rpc-address.[nameservice ID]

    For both of the previously-configured NameNode IDs, the full address and RPC port of the NameNode process. For example:

    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn1</name>
      <value>machine1.example.com:8020</value>
    </property>
    <property>
      <name>dfs.namenode.rpc-address.mycluster.nn2</name>
      <value>machine2.example.com:8020</value>
    </property>
      Note:

    You may have similarly configured the servicerpc-address setting.
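
    For example, it might look like this (the port shown, 8022, is only an assumed value):

    <property>
      <name>dfs.namenode.servicerpc-address.mycluster.nn1</name>
      <value>machine1.example.com:8022</value>
    </property>
    <property>
      <name>dfs.namenode.servicerpc-address.mycluster.nn2</name>
      <value>machine2.example.com:8022</value>
    </property>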

    dfs.namenode.http-address.[nameservice ID]

    The addresses for both NameNodes' HTTP servers to listen on. For example:

    <property>
      <name>dfs.namenode.http-address.mycluster.nn1</name>
      <value>machine1.example.com:50070</value>
    </property>
    <property>
      <name>dfs.namenode.http-address.mycluster.nn2</name>
      <value>machine2.example.com:50070</value>
    </property>
      Note:

    If you have Hadoop's Kerberos security features enabled, and you use HSFTP, you will have set the https-address similarly for each NameNode.
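
    For example (a sketch; 50470 is the default HTTPS port and is only an assumed value here):

    <property>
      <name>dfs.namenode.https-address.mycluster.nn1</name>
      <value>machine1.example.com:50470</value>
    </property>
    <property>
      <name>dfs.namenode.https-address.mycluster.nn2</name>
      <value>machine2.example.com:50470</value>
    </property>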

    dfs.namenode.shared.edits.dir

    The path to the remote shared edits directory which the Standby NameNode uses to stay up-to-date with all the file system changes the Active NameNode makes. You should have configured only one of these directories, mounted read/write on both NameNode machines. The value of this setting should be the absolute path to this directory on the NameNode machines. For example:

    <property>
      <name>dfs.namenode.shared.edits.dir</name>
      <value>file:///mnt/filer1/dfs/ha-name-dir-shared</value>
    </property>

    dfs.client.failover.proxy.provider.[nameservice ID]

    The name of the Java class which the DFS Client uses to determine which NameNode is the current Active, and therefore which NameNode is currently serving client requests. The only implementation which shipped with Hadoop is the ConfiguredFailoverProxyProvider. For example:

    <property>
      <name>dfs.client.failover.proxy.provider.mycluster</name>
      <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    dfs.ha.fencing.methods

    A list of scripts or Java classes which will be used to fence the Active NameNode during a failover.
      Note:

    If you implemented your own custom fencing method, see the org.apache.hadoop.ha.NodeFencer class.

    • The sshfence fencing method

      sshfence - SSH to the Active NameNode and kill the process

      For example:

      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
      </property>
      
      <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/home/exampleuser/.ssh/id_rsa</value>
      </property>
      Optionally, you may have configured a non-standard username or port for the SSH connection, as shown below, as well as a timeout for the SSH, in milliseconds:
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence([[username][:port]])</value>
      </property>
      <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
        <description>
          SSH connection timeout, in milliseconds, to use with the builtin
          sshfence fencer.
        </description>
      </property>
    • The shell fencing method

      shell - run an arbitrary shell command to fence the Active NameNode

      The shell fencing method runs an arbitrary shell command, which you may have configured as shown below:
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>shell(/path/to/my/script.sh arg1 arg2 ...)</value>
      </property>
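
    If you configured more than one fencing method, the methods are listed one per line and attempted in order until one succeeds. A common pattern (shown only as an assumed example, not necessarily your configuration) is sshfence with a shell(/bin/true) fallback:
      <property>
        <name>dfs.ha.fencing.methods</name>
        <value>
          sshfence
          shell(/bin/true)
        </value>
      </property>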
Automatic failover: If you configured automatic failover, you will also have set the following two parameters.
  • In your hdfs-site.xml:
    <property>
      <name>dfs.ha.automatic-failover.enabled</name>
      <value>true</value>
    </property>
  • In your core-site.xml file:

    <property>
      <name>ha.zookeeper.quorum</name>
      <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
    </property>

Other properties: There are several other configuration parameters which you may have set to control the behavior of automatic failover, though they are not necessary for most installations. See the configuration section of the Hadoop documentation for details.

Redeploying HDFS High Availability

If you need to redeploy HA using Quorum-based storage after temporarily disabling it, proceed as follows:

  1. Shut down the cluster as described in Step 1 of the previous section.
  2. Uncomment the properties you commented out in Step 2 of the previous section.
  3. Deploy HDFS HA, following the instructions under Deploying HDFS High Availability.

http://www.cloudera.com/content/cloudera-content/cloudera-docs/CDH5/latest/CDH5-High-Availability-Guide/cdh5hag_hdfs_ha_admin.html

 

 
