Today, after installing Hadoop with brew on my Mac, starting YARN failed with "localhost: ERROR: Cannot set priority of nodemanager process 29577":
./start-yarn.sh
Starting resourcemanager
Starting nodemanagers
localhost: ERROR: Cannot set priority of nodemanager process 29577
cd into the logs directory to look at the details:
cd /opt/homebrew/Cellar/hadoop/3.4.0/libexec/logs
Then open the NodeManager log file (the one ending in .log) and find the failing section. Here is the relevant part of mine:
2024-04-07 10:30:37,040 ERROR org.apache.hadoop.yarn.server.nodemanager.NodeManager: Error starting NodeManager
java.lang.ExceptionInInitializerError
at com.google.inject.internal.cglib.reflect.$FastClassEmitter.<init>(FastClassEmitter.java:67)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.generateClass(FastClass.java:72)
at com.google.inject.internal.cglib.core.$DefaultGeneratorStrategy.generate(DefaultGeneratorStrategy.java:25)
at com.google.inject.internal.cglib.core.$AbstractClassGenerator.create(AbstractClassGenerator.java:216)
at com.google.inject.internal.cglib.reflect.$FastClass$Generator.create(FastClass.java:64)
at com.google.inject.internal.BytecodeGen.newFastClass(BytecodeGen.java:204)
at com.google.inject.internal.ProviderMethod$FastClassProviderMethod.<init>(ProviderMethod.java:256)
at com.google.inject.internal.ProviderMethod.create(ProviderMethod.java:71)
at com.google.inject.internal.ProviderMethodsModule.createProviderMethod(ProviderMethodsModule.java:275)
at com.google.inject.internal.ProviderMethodsModule.getProviderMethods(ProviderMethodsModule.java:144)
at com.google.inject.internal.ProviderMethodsModule.configure(ProviderMethodsModule.java:123)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:349)
at com.google.inject.AbstractModule.install(AbstractModule.java:122)
at com.google.inject.servlet.ServletModule.configure(ServletModule.java:49)
at com.google.inject.AbstractModule.configure(AbstractModule.java:62)
at com.google.inject.spi.Elements$RecordingBinder.install(Elements.java:340)
at com.google.inject.spi.Elements.getElements(Elements.java:110)
at com.google.inject.internal.InjectorShell$Builder.build(InjectorShell.java:138)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:104)
at com.google.inject.Guice.createInjector(Guice.java:96)
at com.google.inject.Guice.createInjector(Guice.java:73)
at com.google.inject.Guice.createInjector(Guice.java:62)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:420)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:468)
at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:464)
at org.apache.hadoop.yarn.server.nodemanager.webapp.WebServer.serviceStart(WebServer.java:125)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:195)
at org.apache.hadoop.service.CompositeService.serviceStart(CompositeService.java:123)
at org.apache.hadoop.service.AbstractService.start(AbstractService.java:195)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.initAndStartNodeManager(NodeManager.java:970)
at org.apache.hadoop.yarn.server.nodemanager.NodeManager.main(NodeManager.java:1058)
Caused by: java.lang.reflect.InaccessibleObjectException: Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @56ed9f5e
at java.base/java.lang.reflect.AccessibleObject.throwInaccessibleObjectException(AccessibleObject.java:388)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:364)
at java.base/java.lang.reflect.AccessibleObject.checkCanSetAccessible(AccessibleObject.java:312)
at java.base/java.lang.reflect.Method.checkCanSetAccessible(Method.java:203)
at java.base/java.lang.reflect.Method.setAccessible(Method.java:197)
at com.google.inject.internal.cglib.core.$ReflectUtils$2.run(ReflectUtils.java:56)
at java.base/java.security.AccessController.doPrivileged(AccessController.java:319)
at com.google.inject.internal.cglib.core.$ReflectUtils.<clinit>(ReflectUtils.java:46)
... 32 more
2024-04-07 10:30:37,041 INFO org.apache.hadoop.ipc.Server: Stopping server on 58649
The log shows the error that occurred while starting the NodeManager. The top-level exception is
java.lang.ExceptionInInitializerError
and its root cause is
java.lang.reflect.InaccessibleObjectException
According to the log, this is thrown when reflection tries to access
java.lang.ClassLoader.defineClass()
and finds the method inaccessible. The specific message is:
Unable to make protected final java.lang.Class java.lang.ClassLoader.defineClass(java.lang.String,byte[],int,int,java.security.ProtectionDomain) throws java.lang.ClassFormatError accessible: module java.base does not "opens java.lang" to unnamed module @56ed9f5e
After some digging: Java 9 and later introduced the module system, which restricts reflective access. Here, code in Hadoop or one of its dependencies (the Guice-bundled cglib in this trace) is trying to call ClassLoader.defineClass() without the java.lang package being opened to it. Tweaking the Java setup itself might also work, but I was afraid of breaking something.
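As an alternative (an assumption on my part, not something I tested), you could keep the newer JDK and instead open the java.lang package to unnamed modules via JVM options in hadoop-env.sh:

```shell
# Untested alternative: keep JDK 11 and open java.lang to reflection.
# YARN_NODEMANAGER_OPTS is a standard hadoop-env.sh hook; whether this
# flag alone is enough for the NodeManager is an assumption.
export YARN_NODEMANAGER_OPTS="$YARN_NODEMANAGER_OPTS --add-opens java.base/java.lang=ALL-UNNAMED"
```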
(My guess at the cause: when brew installed Hadoop, it also pulled in a JDK as a dependency, and that JDK is newer than Java 9. Sure enough, I later found a JDK 11 under the Homebrew directory.)
Note what the Hadoop docs say: "Apache Hadoop 3.3 and upper supports Java 8 and Java 11 (runtime only). Please compile Hadoop with Java 8. Compiling Hadoop with Java 11 is not supported."
So we just need to point Hadoop at a Java 1.8 installation. Switch to the
/opt/homebrew/Cellar/hadoop/3.4.0/libexec/etc/hadoop
directory and edit
hadoop-env.sh
In vim, search for JAVA_HOME with / (press n to jump to the next match). The comment above it notes that on most platforms JAVA_HOME
is required, except on OS X (that is, macOS). It doesn't hurt to set it anyway: uncomment the export line and fill in your own JDK path:
# The java implementation to use. By default, this environment
# variable is REQUIRED on ALL platforms except OS X!
export JAVA_HOME=/Library/Java/JavaVirtualMachines/jdk-1.8.jdk/Contents/Home
Save and quit with :wq, then source the file.
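If you don't know the exact JDK path, macOS ships a helper that resolves it for you; assuming a 1.8 JDK is actually installed, the export line could be written this way instead:

```shell
# Resolve the Java 8 home dynamically instead of hardcoding the path
# (macOS-only helper; assumes a 1.8 JDK is installed on the machine).
export JAVA_HOME=$(/usr/libexec/java_home -v 1.8)
```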
Then go back to the sbin directory and run ./start-yarn.sh or ./start-all.sh, and the error is gone. Here I ran ./start-all.sh:
WARNING: Attempting to start all Apache Hadoop daemons as yangshuhao in 10 seconds.
WARNING: This is not a recommended production deployment configuration.
WARNING: Use CTRL-C to abort.
Starting namenodes on [localhost]
Starting datanodes
Starting secondary namenodes [yangshuhaodebijibendiannao.local]
2024-04-07 11:42:20,539 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting resourcemanager
Starting nodemanagers
Run jps, and all the daemons are there:
40608 ResourceManager
40273 DataNode
40710 NodeManager
40169 NameNode
40778 Jps
40414 SecondaryNameNode
One more reminder: when this first failed, I assumed my earlier setup was wrong and reconfigured everything, and afterwards the DataNode kept dying on startup. That happens when the NameNode has been formatted too many times. Go to
/opt/homebrew/Cellar/hadoop/3.4.0/libexec/tmp/dfs
There are three directories there. cd into name/current and check the clusterID (in the VERSION file), then compare it with the clusterID in data/current. If they differ, use vim to change the data one to match the name one, then restart the cluster.
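The compare-and-fix step can also be scripted. A minimal sketch, run here against temporary stand-in VERSION files rather than the real dfs directory (on the real install they live under tmp/dfs/name/current/VERSION and tmp/dfs/data/current/VERSION and contain more keys than just clusterID):

```shell
# Simulated demo of the clusterID fix using temp files.
DFS=$(mktemp -d)
mkdir -p "$DFS/name/current" "$DFS/data/current"
echo "clusterID=CID-name-1111" > "$DFS/name/current/VERSION"
echo "clusterID=CID-data-2222" > "$DFS/data/current/VERSION"

# Copy the NameNode's clusterID into the DataNode's VERSION file.
# sed -i.bak works on both BSD (macOS) and GNU sed.
CID=$(grep '^clusterID=' "$DFS/name/current/VERSION")
sed -i.bak "s/^clusterID=.*/$CID/" "$DFS/data/current/VERSION"

grep '^clusterID=' "$DFS/data/current/VERSION"
# → clusterID=CID-name-1111
```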