MapReduce authentication failure against a ZooKeeper cluster
This is a Q&A thread from the ZooKeeper mailing list. The exchange is short and fairly self-explanatory, so it is presented below, with the original English emails appended at the end.
I have a ZooKeeper cluster used by Kafka, and this Kafka has no SASL security configuration at all. I also have a Hadoop cluster configured with security, which uses a different ZooKeeper cluster. So altogether I have two ZooKeeper clusters: one secured and one unsecured.
Now, when I run a MapReduce program that fetches data from Kafka through the unsecured ZooKeeper cluster, I get the following SASL error, even though the ZooKeeper cluster being read from has no security configured.
: org.I0Itec.zkclient.exception.ZkException: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers/myGroup/ids/myGroup_server-1445242267846-420b8826
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:304)
at org.I0Itec.zkclient.ZkClient.createEphemeral(ZkClient.java:328)
at kafka.utils.ZkUtils$.createEphemeralPath(ZkUtils.scala:222)
at kafka.utils.ZkUtils$.createEphemeralPathExpectConflict(ZkUtils.scala:237)
at kafka.utils.ZkUtils$.createEphemeralPathExpectConflictHandleZKBug(ZkUtils.scala:275)
at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$registerConsumerInZK(ZookeeperConsumerConnector.scala:254)
at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:239)
at kafka.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:153)
at org.apache.spark.streaming.kafka.KafkaReceiver.onStart(KafkaInputDStream.scala:111)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:125)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:109)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:308)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:300)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers/myGroup/ids/myGroup_server-1445242267846-420b8826
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:87)
at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:308)
at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:304)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
... 21 more
When running the MapReduce program I have already tried setting the parameter -Djava.security.auth.login.config=none.
Is there any way to get rid of these security warnings?
Regards,
Sagar
Do you have the JAAS config property set, which would make your client believe you want to authenticate? You may also want to ask the same question on the Kafka mailing list.
Flavio
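The property Flavio is referring to is java.security.auth.login.config, which points the JVM at a JAAS file; client-side SASL in ZooKeeper is also governed by zookeeper.sasl.client, which defaults to true. A quick way to see what the consumer JVM actually picks up is to print both properties. A minimal diagnostic sketch, with a made-up class name (not something from the thread):

// Diagnostic sketch: print the properties that control client-side SASL in the
// same JVM/environment that runs the Kafka consumer.
public final class CheckZkSaslProps {
    public static void main(String[] args) {
        // A JAAS file set here is what makes the ZooKeeper client try SASL.
        System.out.println("java.security.auth.login.config = "
                + System.getProperty("java.security.auth.login.config"));
        // Defaults to true when unset.
        System.out.println("zookeeper.sasl.client = "
                + System.getProperty("zookeeper.sasl.client", "true (default)"));
    }
}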
Thanks for the prompt response. Since I am running MapReduce on a Hadoop node, I feel I should use the default JAAS file, otherwise my MapReduce authentication might break… right?
The exception indicates an issue on the ZooKeeper side.
I tried with a non-secure Hadoop cluster, and that works as expected.
Regards,
Sagar
As I said, if the ZK client finds that system property set, it will assume you want to authenticate. You could also try ***setting zookeeper.sasl.client to false***.
Setting that parameter when executing the MapReduce job got it working without any issues. Thanks for your prompt responses :)
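For reference, here is a minimal sketch of applying Flavio's suggestion in code; the class name is a placeholder and the consumer setup itself is omitted. The equivalent command-line form is -Dzookeeper.sasl.client=false, and in a distributed job the flag has to reach the worker JVMs that actually open the ZooKeeper connection, not just the submitting process.

// Illustrative sketch: disable client-side SASL before the Kafka consumer opens
// its ZooKeeper connection. Equivalent to passing -Dzookeeper.sasl.client=false
// to the JVM that runs the consumer. Class name is a placeholder.
public final class DisableZkClientSasl {
    public static void main(String[] args) {
        // Must be set before the first ZooKeeper client is constructed; after
        // that the client has already decided whether to attempt SASL based on
        // the JAAS config inherited from the secure Hadoop environment.
        System.setProperty("zookeeper.sasl.client", "false");

        // ... create the Kafka consumer / start the streaming job as usual ...
    }
}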
The original email thread follows.
Hi All,
I have a zookeeper cluster configured with Kafka without any SASL security configuration. Also I have a hadoop cluster configured with security which uses a different zookeeper cluster. So overall, I have two zookeeper clusters - one with security and one without security.
Now when I try to run a mapreduce program to fetch data from Kafka using non-secure zookeeper, I get following error message of SASL though my read zookeeper cluster does not have security configured.
: org.I0Itec.zkclient.exception.ZkException: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers/myGroup/ids/myGroup_server-1445242267846-420b8826
at org.I0Itec.zkclient.exception.ZkException.create(ZkException.java:68)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:685)
at org.I0Itec.zkclient.ZkClient.create(ZkClient.java:304)
at org.I0Itec.zkclient.ZkClient.createEphemeral(ZkClient.java:328)
at kafka.utils.ZkUtils$.createEphemeralPath(ZkUtils.scala:222)
at kafka.utils.ZkUtils$.createEphemeralPathExpectConflict(ZkUtils.scala:237)
at kafka.utils.ZkUtils$.createEphemeralPathExpectConflictHandleZKBug(ZkUtils.scala:275)
at kafka.consumer.ZookeeperConsumerConnector.kafka$consumer$ZookeeperConsumerConnector$$registerConsumerInZK(ZookeeperConsumerConnector.scala:254)
at kafka.consumer.ZookeeperConsumerConnector.consume(ZookeeperConsumerConnector.scala:239)
at kafka.consumer.ZookeeperConsumerConnector.createMessageStreams(ZookeeperConsumerConnector.scala:153)
at org.apache.spark.streaming.kafka.KafkaReceiver.onStart(KafkaInputDStream.scala:111)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.startReceiver(ReceiverSupervisor.scala:125)
at org.apache.spark.streaming.receiver.ReceiverSupervisor.start(ReceiverSupervisor.scala:109)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:308)
at org.apache.spark.streaming.scheduler.ReceiverTracker$ReceiverLauncher$$anonfun$8.apply(ReceiverTracker.scala:300)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:1767)
at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:63)
at org.apache.spark.scheduler.Task.run(Task.scala:70)
at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:213)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
Caused by: org.apache.zookeeper.KeeperException$AuthFailedException: KeeperErrorCode = AuthFailed for /consumers/myGroup/ids/myGroup_server-1445242267846-420b8826
at org.apache.zookeeper.KeeperException.create(KeeperException.java:123)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783)
at org.I0Itec.zkclient.ZkConnection.create(ZkConnection.java:87)
at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:308)
at org.I0Itec.zkclient.ZkClient$1.call(ZkClient.java:304)
at org.I0Itec.zkclient.ZkClient.retryUntilConnected(ZkClient.java:675)
... 21 more
I did try setting up parameter -Djava.security.auth.login.config=none while trying to run the mapreduce program.
Any thoughts on how I can get rid of this security warning?
Regards,
Sagar
Do you have the jaas config property set which is causing your client to believe you want to authenticate? You may also want to ask that same question on the Kafka list.
Flavio
Thanks for the prompt response. Since i am running MapReduce on hadoop node, i feel i should use default jaas file, else my MapReduce auth might break … right?
Exception indicates issue at zookeeper side.
I tried with non-secure hadoop cluster which works as expected.
Regards,
Sagar
All I’m saying is that if the zk client finds the system property set, it will think that you want to authenticate. You could also try setting zookeeper.sasl.client to false.
Setting up that parameter while executing MapReduce job helped get it working without any issues. Thanks for your prompt responses :)