I recently ran into a problem with Huawei Cloud HBase and couldn't find a solution online, so I'm recording it here.

Problem description:
Storm consumes data from Kafka and writes it into HBase. Storm and Kafka are self-installed; HBase is Huawei Cloud's. If Storm writes to HBase directly in the usual way, ZooKeeper connection errors occur, because two separate ZooKeeper ensembles are involved: the one used by Kafka/Storm and the one inside the Huawei Cloud HBase cluster (which is Kerberos-secured).

The fix was mainly to add the two methods below, init() and login(), and call them before accessing HBase.
public static Configuration init() {
    // Load the client-side Hadoop/HBase configuration files exported from the
    // Huawei Cloud cluster. Returning the Configuration (instead of discarding
    // it) lets the caller use it to create the HBase connection later.
    Configuration conf = new Configuration();
    String path = AttributeCommon.hbase_path;
    String confFilesPath = path + "conf" + File.separator;
    conf.addResource(new Path(confFilesPath + "core-site.xml"));
    conf.addResource(new Path(confFilesPath + "hdfs-site.xml"));
    conf.addResource(new Path(confFilesPath + "hbase-site.xml"));
    return conf;
}
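For reference, the hbase-site.xml loaded above comes from the client package downloaded from the Huawei Cloud cluster. The properties relevant to the two-ZooKeeper conflict are the quorum settings, which point the HBase client at the cluster's own ZooKeeper rather than the one used by Kafka/Storm. A sketch of the relevant excerpt (host names and port are placeholders; use the values from your own cluster's package):

```xml
<!-- Excerpt of hbase-site.xml from the cluster client package.
     Host names and the client port below are placeholders. -->
<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>node-1,node-2,node-3</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>
```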
public static void login() {
    try {
        String path = AttributeCommon.hbase_path;
        String auth = path + "auth" + File.separator;
        // Kerberos identity: the cluster user, its keytab, and the krb5.conf
        // downloaded together with the credentials.
        String userPrincipal = "dinfo";
        String userKeytabPath = auth + "user.keytab";
        String krb5ConfPath = auth + "krb5.conf";
        String ZKServerPrincipal = "zookeeper/hadoop.hadoop.com";
        String ZOOKEEPER_DEFAULT_LOGIN_CONTEXT_NAME = "Client";
        String ZOOKEEPER_SERVER_PRINCIPAL_KEY = "zookeeper.server.principal";
        Configuration hadoopConf = new Configuration();
        // Point the JAAS "Client" login context at our keytab, tell the
        // ZooKeeper client which server principal to expect, then log in.
        LoginUtil.setJaasConf(ZOOKEEPER_DEFAULT_LOGIN_CONTEXT_NAME, userPrincipal, userKeytabPath);
        LoginUtil.setZookeeperServerPrincipal(ZOOKEEPER_SERVER_PRINCIPAL_KEY, ZKServerPrincipal);
        LoginUtil.login(userPrincipal, userKeytabPath, krb5ConfPath, hadoopConf);
    } catch (Exception e) {
        e.printStackTrace();
    }
}
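To make the setJaasConf call less opaque: what the ZooKeeper client ultimately reads is a JAAS configuration with a "Client" login context backed by the keytab. I haven't inspected LoginUtil's internals, but conceptually the configuration it sets up corresponds to a jaas.conf like the following (the keytab path is a placeholder for whatever AttributeCommon.hbase_path resolves to):

```
Client {
  com.sun.security.auth.module.Krb5LoginModule required
  useKeyTab=true
  keyTab="<hbase_path>/auth/user.keytab"
  principal="dinfo"
  useTicketCache=false
  storeKey=true
  debug=false;
};
```

If the ZooKeeper connection still fails after login, comparing this context against what LoginUtil actually generated (enable debug=true) is a good first check.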
The LoginUtil class is included in the HBase demo project that Huawei Cloud provides.
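For context, a hedged sketch of where these calls sit in the Storm topology: in this setup the login happens once per worker, in the bolt's prepare(), before the HBase connection is created. Class and field names below (HBaseWriterBolt, HBaseUtil) are illustrative, not from the original code; it assumes init() and login() live in a utility class and that the standard HBase client API is on the classpath:

```java
// Illustrative sketch only: HBaseWriterBolt and HBaseUtil are hypothetical names.
public class HBaseWriterBolt extends BaseRichBolt {
    private Connection connection; // org.apache.hadoop.hbase.client.Connection

    @Override
    public void prepare(Map stormConf, TopologyContext context, OutputCollector collector) {
        try {
            Configuration conf = HBaseUtil.init(); // load core/hdfs/hbase-site.xml
            HBaseUtil.login();                     // Kerberos + ZooKeeper JAAS login
            connection = ConnectionFactory.createConnection(conf);
        } catch (IOException e) {
            throw new RuntimeException("HBase connection failed", e);
        }
    }

    @Override
    public void execute(Tuple tuple) {
        // ... write the tuple into HBase via this.connection ...
    }
}
```

Doing the login in prepare() (rather than in the topology's main method) matters because prepare() runs inside each worker JVM, which is where the Kerberos ticket and JAAS context actually need to exist.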