Spark Scala API for Reading HBase (with Kerberos Authentication)
Component versions
```xml
<!-- Languages -->
<java.version>1.8</java.version>
<scala.version>2.11.8</scala.version>
<scala.binary.version>2.11</scala.binary.version>
<!-- Apache Spark -->
<spark.version>2.1.0</spark.version>
<hadoop.version>3.0.0-cdh6.1.1</hadoop.version>
<!-- Third Party -->
<scopt.version>3.3.0</scopt.version>
<typesafe-config.version>1.3.0</typesafe-config.version>
<hbase.version>2.1.0-cdh6.1.1</hbase.version>
```
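These properties are normally referenced from the Maven `<dependencies>` section. A minimal sketch of what that might look like — the artifact list and the Cloudera repository entry are my assumptions, not taken from the original article:

```xml
<!-- CDH artifacts are served from Cloudera's public Maven repository -->
<repositories>
  <repository>
    <id>cloudera</id>
    <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
  </repository>
</repositories>

<dependencies>
  <dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-core_${scala.binary.version}</artifactId>
    <version>${spark.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hadoop</groupId>
    <artifactId>hadoop-client</artifactId>
    <version>${hadoop.version}</version>
  </dependency>
  <dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-client</artifactId>
    <version>${hbase.version}</version>
  </dependency>
</dependencies>
```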
Preparation after creating the project
1. Download the corresponding *-site.xml files from the cluster
I won't walk through this step in detail here — the files all live under /etc, and you can search for them, for example:

```shell
find / -name core-site.xml
```
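Assuming a CDH-style layout where the client configs sit under /etc, the relevant files can be located in one pass; copying them into `src/main/resources` so they end up on the application classpath is my own convention, not a step stated in the original:

```shell
# Find every Hadoop/HBase client config under /etc in one pass
find /etc -name "*-site.xml" 2>/dev/null

# You typically want core-site.xml, hdfs-site.xml and hbase-site.xml
# on the classpath, e.g. copied into src/main/resources/
```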
2. Generate a keytab file (*.keytab) for the user you want to authenticate as

```shell
kadmin.local
ktadd -k /opt/hbase.keytab username/_HOST@XXXX.COM
```
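After exporting the keytab, it is worth confirming that it actually contains the expected principal before shipping it to the client machine. MIT Kerberos's `klist` can list the entries (the path below matches the `ktadd` command above; this verification step is my addition):

```shell
# List the principals and key timestamps stored in the keytab
klist -kt /opt/hbase.keytab
```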
3. Download the krb5.conf file from the server
![screenshot](https://i-blog.csdnimg.cn/blog_migrate/3d539518e8e2feee1ec99dca9ff4b02e.png)
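For reference, a minimal krb5.conf tends to look like the fragment below. The realm XXXX.COM comes from the principal used in this article; the KDC hostname is a placeholder you must replace with your own:

```ini
[libdefaults]
  default_realm = XXXX.COM

[realms]
  XXXX.COM = {
    kdc = kdc-host.example.com          ; placeholder, use your KDC
    admin_server = kdc-host.example.com ; placeholder, use your admin server
  }
```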
Spark Scala API
```scala
package compute

import java.io.IOException

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.hbase._
import org.apache.hadoop.hbase.client._
import org.apache.hadoop.hbase.util.Bytes
import org.apache.hadoop.security.UserGroupInformation

object LaboratoryBiasCV {

  val krbConf = "..\\krb5.conf"      // NOTE: make sure the path is correct; I changed it when pasting this for upload
  val krbKeytab = "..\\hbase.keytab" // NOTE: make sure the path is correct; I changed it when pasting this for upload
  val krbPrincipal = "hbase@XXXX.COM"
  val zookeeperClient = "node01_ip,node02_ip,node03_ip" // ZooKeeper node IPs

  def main(args: Array[String]): Unit = {
    // Point the JVM at the Kerberos client config
    System.setProperty("java.security.krb5.conf", krbConf)

    // Build the HBase configuration, then authenticate from the keytab
    val hbaseConf: Configuration = HBaseConfiguration.create()
    hbaseConf.set("hbase.zookeeper.quorum", zookeeperClient)
    hbaseConf.set("hbase.zookeeper.property.clientPort", "2181")
    hbaseConf.set("hadoop.security.authentication", "kerberos")
    hbaseConf.set("hbase.security.authentication", "kerberos")
    UserGroupInformation.setConfiguration(hbaseConf)
    UserGroupInformation.loginUserFromKeytab(krbPrincipal, krbKeytab)
  }
}
```
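Once the Kerberos login succeeds, the authenticated configuration can be used with the plain HBase 2.x client API. A hedged sketch — the table name, row key, and column family/qualifier below are illustrative placeholders I made up, and `hbaseConf` refers to the Kerberos-enabled configuration built in `main`:

```scala
// Sketch: fetch a single row via the HBase client API.
// "lab:bias_cv", "row-001", "cf" and "col" are placeholder names.
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.{ConnectionFactory, Get, Result}
import org.apache.hadoop.hbase.util.Bytes

val connection = ConnectionFactory.createConnection(hbaseConf)
try {
  val table = connection.getTable(TableName.valueOf("lab:bias_cv"))
  val result: Result = table.get(new Get(Bytes.toBytes("row-001")))
  val value = Bytes.toString(result.getValue(Bytes.toBytes("cf"), Bytes.toBytes("col")))
  println(s"cf:col = $value")
  table.close()
} finally {
  connection.close() // always release the connection, even on failure
}
```

Note that this covers the driver side only; if you later read HBase from Spark executors, each executor also needs access to the keytab or delegation tokens.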