The previous post set up Hive metadata management. Atlas can also ingest metadata from Sqoop, HBase, Kafka, and Storm, but after configuring Sqoop exactly per the official docs, no Sqoop metadata appeared in Atlas, and no error was reported anywhere.
The Sqoop log is shown in the figure; the last red circle marks the problem:
As the log shows, the Sqoop hook did fire, yet the data never reached Atlas and no error was raised. Searching Baidu and Google turned up only a handful of forum posts mentioning the same symptom, none apparently resolved, presumably because few people use this integration. With no other option I opened the source and combed through the code, eventually narrowing it down to a single class, KafkaNotification, in atlas-notification-1.1.0.jar.
At the spot matching the red-circled log line, the code clearly logs two lines, one on entry and one on exit, yet the Sqoop log shows only the first. That was my initial puzzle. Experience says an exception must be thrown in between, so execution never reaches the exit log.
private synchronized void createProducer() {
    LOG.info("==> KafkaNotification.createProducer()");

    if (producer == null) {
        producer = new KafkaProducer(properties);
    }

    LOG.info("<== KafkaNotification.createProducer()");
}
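Why the failure stays completely silent deserves one extra step of reasoning. The stack trace further down shows the hook sending notifications from a thread pool (AtlasHook$2.run via ThreadPoolExecutor), and an exception thrown inside a task submitted to an ExecutorService is captured in the returned Future rather than printed anywhere. A minimal, self-contained sketch of that behavior (illustrative code, not Atlas source):

```java
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class SwallowedException {

    // Submit a task that throws; the exception is stored in the Future
    // and never reaches the console or the log on its own.
    static String runAndGetCause() throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<?> f = pool.submit((Runnable) () -> {
            throw new RuntimeException("Failed to construct kafka producer");
        });
        try {
            f.get();   // only here does the exception surface
            return null;
        } catch (ExecutionException e) {
            return e.getCause().getMessage();
        } finally {
            pool.shutdown();
        }
    }

    public static void main(String[] args) throws Exception {
        // If the caller never calls get() -- as with a fire-and-forget
        // notification hook -- the error simply vanishes.
        System.out.println("captured: " + runAndGetCause());
    }
}
```

This is exactly why neither the Sqoop console nor the log showed anything: the hook fires and forgets, so the only way to see the exception is to catch it at the point where it is thrown.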
To confirm this was the culprit, I wrapped the construction in a try/catch. By then the hunt had cost me a whole day and I was fuming, so the marker log lines came out as an expletive. I then rebuilt the JAR and swapped it in for the original:
private synchronized void createProducer() {
    LOG.info("==> KafkaNotification.createProducer()");

    try {
        if (producer == null) {
            LOG.info("FUCK ATLAS");    // marker: before construction
            producer = new KafkaProducer(properties);
            LOG.info("FUCK ATLAS");    // marker: after construction
        }
    } catch (Exception ex) {
        // surface the otherwise-swallowed exception
        ex.printStackTrace();
    }

    LOG.info("<== KafkaNotification.createProducer()");
}
Retesting then produced the following exception:
java.lang.Exception: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.atlas.kafka.KafkaNotification.createProducer(KafkaNotification.java:292)
at org.apache.atlas.kafka.KafkaNotification.sendInternal(KafkaNotification.java:198)
at org.apache.atlas.notification.AbstractNotification.send(AbstractNotification.java:89)
at org.apache.atlas.hook.AtlasHook.notifyEntitiesInternal(AtlasHook.java:194)
at org.apache.atlas.hook.AtlasHook$2.run(AtlasHook.java:161)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.kafka.common.KafkaException: Failed to construct kafka producer
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:433)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:291)
at org.apache.atlas.kafka.KafkaNotification.createProducer(KafkaNotification.java:287)
... 9 more
Caused by: java.security.AccessControlException: access denied ("javax.management.MBeanTrustPermission" "register")
at java.security.AccessControlContext.checkPermission(AccessControlContext.java:472)
at java.lang.SecurityManager.checkPermission(SecurityManager.java:585)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.checkMBeanTrustPermission(DefaultMBeanServerInterceptor.java:1848)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerMBean(DefaultMBeanServerInterceptor.java:322)
at com.sun.jmx.mbeanserver.JmxMBeanServer.registerMBean(JmxMBeanServer.java:522)
at org.apache.kafka.common.metrics.JmxReporter.reregister(JmxReporter.java:162)
at org.apache.kafka.common.metrics.JmxReporter.metricChange(JmxReporter.java:82)
at org.apache.kafka.common.metrics.Metrics.registerMetric(Metrics.java:535)
at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:491)
at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:475)
at org.apache.kafka.common.metrics.Metrics.addMetric(Metrics.java:460)
at org.apache.kafka.common.metrics.Metrics.<init>(Metrics.java:154)
at org.apache.kafka.common.metrics.Metrics.<init>(Metrics.java:120)
at org.apache.kafka.clients.producer.KafkaProducer.<init>(KafkaProducer.java:337)
... 11 more
Logging initialized using configuration in jar:file:/opt/cloudera/parcels/CDH-5.10.2-1.cdh5.10.2.p0.5/jars/hive-common-1.1.0-cdh5.10.2.jar!/hive-log4j.properties
(the same stack trace is printed twice more)
OK
Time taken: 2.611 seconds
Loading data to table default.jlwang5
Table default.jlwang5 stats: [numFiles=4, numRows=0, totalSize=37888, rawDataSize=0]
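The bottom "Caused by" is the real story: Kafka's JmxReporter registers producer metrics as JMX MBeans, and when a SecurityManager is active, MBeanServer.registerMBean checks MBeanTrustPermission("register"), which the default policy denies. A minimal sketch of that registration path (the demo class and object name here are illustrative):

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServer;
import javax.management.ObjectName;

public class JmxRegisterDemo {

    // A trivial Standard MBean: the interface must be named <Class>MBean.
    public interface StatusMBean { String getStatus(); }
    public static class Status implements StatusMBean {
        public String getStatus() { return "ok"; }
    }

    // Kafka's JmxReporter performs a registerMBean call like this one for
    // every producer metric. Under a SecurityManager, registerMBean checks
    // MBeanTrustPermission("register") -- exactly the permission denied in
    // the trace above -- and throws AccessControlException if it is missing.
    static boolean register() throws Exception {
        MBeanServer server = ManagementFactory.getPlatformMBeanServer();
        ObjectName name = new ObjectName("demo:type=Status");
        if (!server.isRegistered(name)) {
            server.registerMBean(new Status(), name);
        }
        return server.isRegistered(name);
    }

    public static void main(String[] args) throws Exception {
        System.out.println("registered: " + register());
    }
}
```

Note also that the Hive job itself succeeds ("OK", "Loading data to table default.jlwang5"); only the metadata notification dies, which is why the failure is so easy to miss.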
Searching online pointed to the JDK's java.policy file: the missing permission has to be granted there. Honestly, I couldn't be bothered to dig into the why and just followed the advice.
/usr/java/jdk1.8.0_111/jre/lib/security/java.policy
Add the following as the last line inside the grant block:
permission java.security.AllPermission;
After another test, the Atlas Sqoop hook worked.
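For the record, AllPermission grants every permission to all code, which is a blunt instrument. Since the trace only reports MBeanTrustPermission "register" being denied, a narrower grant in the same java.policy grant block should also suffice (I have not verified this myself):

```
permission javax.management.MBeanTrustPermission "register";
```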