Configuring Azure Blob Storage for a Local HBase Cluster

Overview:


  • hadoop-azure provides the integration between Hadoop and Azure Blob Storage. It requires the hadoop-azure.jar package to be deployed, which is already included by default in the HDP 2.4 distribution.
  • Once configured, all data reads and writes are stored in the Azure Blob Storage account.
  • Multiple Azure Blob Storage accounts can be configured, and the standard Hadoop FileSystem interface is implemented.
  • File system paths are referenced using URLs with the wasb scheme, e.g. wasb://<container>@<account>.blob.core.chinacloudapi.cn/<path>.
  • Tested on both Linux and Windows. Tested at scale.
  • Azure Blob Storage involves three concepts:
    1. Storage Account: All access is done through a storage account.
    2. Container: A container is a grouping of multiple blobs. A storage account may have multiple containers. In Hadoop, an entire file system hierarchy is stored in a single container. It is also possible to configure multiple containers, effectively presenting multiple file systems that can be referenced using distinct URLs.
    3. Blob: A file of any type and size. In Hadoop, files are stored in blobs. The internal implementation also uses blobs to persist the file system hierarchy and other metadata.

Configuration:


  • In the Azure China portal (https://manage.windowsazure.cn), create a Blob storage account; in this walkthrough it is named localhbase.
  • Configure the Azure Blob Storage access credentials (the account key) and switch the default file system in the local Hadoop core-site.xml file, as follows:


    <property>
      <name>fs.defaultFS</name>
      <value>wasb://localhbase@localhbase.blob.core.chinacloudapi.cn</value>
    </property>
    <property>
      <name>fs.azure.account.key.localhbase.blob.core.chinacloudapi.cn</name>
      <value>YOUR ACCESS KEY</value>
    </property>


  • In most Hadoop clusters the core-site.xml file is world-readable. For better security, the account key can be stored in encrypted form and decrypted at runtime by a configured program: ShellDecryptionKeyProvider invokes the configured script with the encrypted key and reads the decrypted key from the script's standard output. The configuration for this scenario is as follows (an optional, security-oriented setup); a minimal sketch of such a decryption program follows the configuration:


    <property>
      <name>fs.azure.account.keyprovider.localhbase.blob.core.chinacloudapi.cn</name>
      <value>org.apache.hadoop.fs.azure.ShellDecryptionKeyProvider</value>
    </property>
    <property>
      <name>fs.azure.account.key.localhbase.blob.core.chinacloudapi.cn</name>
      <value>YOUR ENCRYPTED ACCESS KEY</value>
    </property>
    <property>
      <name>fs.azure.shellkeyprovider.script</name>
      <value>PATH TO DECRYPTION PROGRAM</value>
    </property>

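    As an illustration, here is a minimal sketch of such a decryption program (a hypothetical KeyDecrypter class, not part of hadoop-azure). It assumes the provider appends the encrypted key as the last argument of the configured command and expects the plaintext key on stdout; the Base64 decode is only a placeholder for a real decryption call:

    // KeyDecrypter.java: hypothetical decryption program for
    // fs.azure.shellkeyprovider.script
    import java.util.Base64;

    public class KeyDecrypter {
        public static void main(String[] args) {
            // The provider passes the encrypted key as the last argument
            String encrypted = args[args.length - 1];
            // Placeholder "decryption": Base64-decode; substitute a real
            // KMS or crypto call in production
            String decrypted = new String(Base64.getDecoder().decode(encrypted));
            // The provider reads the decrypted key from stdout
            System.out.print(decrypted);
        }
    }

    fs.azure.shellkeyprovider.script would then point at a small wrapper script that runs this class (for example, one that execs java -cp ... KeyDecrypter).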

  • The Azure Blob Storage interface for Hadoop supports two kinds of blobs: block blobs and page blobs. Block blobs are the default kind and are good for most big-data use cases, such as input data for Hive, Pig, and analytical MapReduce jobs.

  • Page blob handling in hadoop-azure was introduced to support HBase log files. Page blobs can be written any number of times, whereas block blobs can only be appended to 50,000 times before you run out of blocks and your writes fail. That does not work for HBase logs, so page blob support was introduced to overcome this limitation.

  • Page blobs can be up to 1 TB in size, larger than the maximum 200 GB size for block blobs.

  • To have the files you create be page blobs, set the configuration variable fs.azure.page.blob.dir to a comma-separated list of folder names:

    <property>
       <name>fs.azure.page.blob.dir</name>
       <value>/hbase/WALs,/hbase/oldWALs,/mapreducestaging,/hbase/MasterProcWALs,/atshistory,/tezstaging,/ams/hbase</value>
    </property>

Verification:
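
A minimal smoke test, assuming the core-site.xml above is on the classpath and the hadoop-azure jar (plus the Azure storage SDK it depends on) is deployed: create a file through the default file system, read it back, and delete it. The class name and test path below are placeholders:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataInputStream;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WasbSmokeTest {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();   // picks up core-site.xml from the classpath
            FileSystem fs = FileSystem.get(conf);       // fs.defaultFS -> wasb://localhbase@...
            System.out.println("File system: " + fs.getUri());

            Path p = new Path("/tmp/wasb-smoke-test.txt");
            try (FSDataOutputStream out = fs.create(p, true)) {
                out.writeUTF("hello wasb");             // the write lands in the blob container
            }
            try (FSDataInputStream in = fs.open(p)) {
                System.out.println("Read back: " + in.readUTF());
            }
            fs.delete(p, false);                        // clean up the test blob
        }
    }

If the account key or jar deployment is wrong, FileSystem.get or the first write fails; on success, the test blob is also visible under the localhbase container in the Azure portal while it exists.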


FAQ


  • Do not run the Ambari Metrics Collector on the same machine as an HBase RegionServer.
  • HA must be configured before changing the data directory to wasb.
  • Add the following configuration to the Hadoop core-site.xml, otherwise the MapReduce2 component will fail to start (note that impl is lowercase):
    <property>         
      <name>fs.AbstractFileSystem.wasb.impl</name>                           
      <value>org.apache.hadoop.fs.azure.Wasb</value> 
    </property>
  • On a locally built cluster, after configuring HA, changing the cluster file system to wasb, and copying the original HBase cluster's physical file directories to the newly created blob storage, inserting data into an indexed table through Phoenix fails. Fix it by adding the following to hbase-site.xml (a quick re-test is sketched after this item):

    <property>         
      <name>hbase.regionserver.wal.codec</name>                           
      <value>org.apache.hadoop.hbase.regionserver.wal.IndexedWALEditCodec</value> 
    </property>
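
    To re-test the indexed-write path after applying the codec change and restarting HBase, here is a minimal sketch using the Phoenix JDBC driver; the ZooKeeper quorum (zk1:2181) and the table/index names are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class PhoenixIndexWriteTest {
        public static void main(String[] args) throws Exception {
            // zk1:2181 stands in for the cluster's ZooKeeper quorum
            try (Connection conn = DriverManager.getConnection("jdbc:phoenix:zk1:2181");
                 Statement st = conn.createStatement()) {
                st.execute("CREATE TABLE IF NOT EXISTS T_WASB (ID BIGINT PRIMARY KEY, V VARCHAR)");
                st.execute("CREATE INDEX IF NOT EXISTS IDX_V ON T_WASB (V)");
                // This indexed write is the operation that failed before
                // IndexedWALEditCodec was configured
                st.execute("UPSERT INTO T_WASB VALUES (1, 'hello')");
                conn.commit();
            }
        }
    }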
