2013-07-23 12:39:32,982 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: NameNode is not formatted.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:212)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:400)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:610)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:591)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1162)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1226)
2013-07-23 12:39:32,983 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-07-23 12:39:32,985 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
2013-07-23 12:43:08,151 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.FileNotFoundException: /hadoop/dfs/name/current/VERSION (Permission denied)
at java.io.RandomAccessFile.open(Native Method)
at java.io.RandomAccessFile.<init>(RandomAccessFile.java:216)
at org.apache.hadoop.hdfs.server.common.Storage.readPropertiesFile(Storage.java:966)
at org.apache.hadoop.hdfs.server.common.Storage.readProperties(Storage.java:915)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:308)
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:202)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:639)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:476)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:400)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:434)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:610)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:591)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1162)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1226)
2013-07-23 12:43:08,152 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2013-07-23 12:43:08,154 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at localhost.localdomain/127.0.0.1
************************************************************/
It turned out the files were owned by root; the operation had been run as the wrong user.
sudo -u hdfs hadoop namenode -format
rm -rf /hadoop/dfs/name/current/
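A quick way to confirm the ownership problem before re-formatting. This is a minimal sketch using a scratch directory; on the real cluster the directory is /hadoop/dfs/name/current and the commented-out chown is the assumed fix:

```shell
# Sketch: check file ownership the same way you would for the NameNode dir.
# A scratch directory stands in for /hadoop/dfs/name/current here.
dir=$(mktemp -d)
touch "$dir/VERSION"
owner=$(stat -c %U "$dir/VERSION")   # GNU stat: print the owning user
echo "VERSION owned by: $owner"
# On the real cluster, if this prints root instead of hdfs, fix it with:
#   chown -R hdfs:hdfs /hadoop/dfs/name
rm -rf "$dir"
```

Fixing ownership with chown avoids the delete-and-reformat step entirely when the metadata is otherwise intact.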
After formatting, another error appeared and the DataNode still would not come up. Reference: http://blog.csdn.net/wh62592855/article/details/5752199
2013-07-23 13:33:42,313 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-548518983-127.0.0.1-1374557478279 (storage id DS-339509952-127.0.0.1-50010-1374555760144) service to localhost.localdomain/127.0.0.1:8020
java.io.IOException: Incompatible clusterIDs in /hadoop/dfs/data: namenode clusterID = CID-228ef924-c2fb-4a1f-9251-7f6c31a2afef; datanode clusterID = CID-9b419458-8e5f-4bab-b60c-cf2d221dc84e
at org.apache.hadoop.hdfs.server.datanode.DataStorage.doTransition(DataStorage.java:391)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:191)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:911)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:882)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:308)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:218)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:660)
at java.lang.Thread.run(Thread.java:662)
2013-07-23 13:33:42,315 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool BP-548518983-127.0.0.1-1374557478279 (storage id DS-339509952-127.0.0.1-50010-1374555760144) service to localhost.localdomain/127.0.0.1:8020
2013-07-23 13:33:42,415 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool BP-548518983-127.0.0.1-1374557478279 (storage id DS-339509952-127.0.0.1-50010-1374555760144)
2013-07-23 13:33:44,416 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Exiting Datanode
2013-07-23 13:33:44,418 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 0
2013-07-23 13:33:44,420 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down DataNode at localhost.localdomain/127.0.0.1
************************************************************/
[root@localhost dfs]# vi data/current/VERSION
[root@localhost dfs]# pwd
/hadoop/dfs
The NameNode's VERSION file:
${dfs.....dir}/name/current/VERSION:
#Tue Jul 23 13:31:18 HKT 2013
namespaceID=246014553
clusterID=CID-228ef924-c2fb-4a1f-9251-7f6c31a2afef
cTime=0
storageType=NAME_NODE
blockpoolID=BP-548518983-127.0.0.1-1374557478279
layoutVersion=-40
The DataNode's VERSION file:
${dfs.....dir}/data/current/VERSION:
#Tue Jul 23 09:32:10 HKT 2013
storageID=DS-339509952-127.0.0.1-50010-1374555760144
clusterID=CID-9b419458-8e5f-4bab-b60c-cf2d221dc84e
cTime=0
storageType=DATA_NODE
layoutVersion=-40
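The mismatch can be checked mechanically before deleting anything. A sketch that extracts and compares the clusterID lines from two VERSION-style files; scratch copies stand in for the real /hadoop/dfs/name/current/VERSION and /hadoop/dfs/data/current/VERSION:

```shell
# Sketch: compare clusterIDs from NameNode and DataNode VERSION files.
nn=$(mktemp); dn=$(mktemp)
echo "clusterID=CID-228ef924-c2fb-4a1f-9251-7f6c31a2afef" > "$nn"
echo "clusterID=CID-9b419458-8e5f-4bab-b60c-cf2d221dc84e" > "$dn"
nn_id=$(grep '^clusterID=' "$nn" | cut -d= -f2)
dn_id=$(grep '^clusterID=' "$dn" | cut -d= -f2)
if [ "$nn_id" != "$dn_id" ]; then
  echo "clusterID mismatch: namenode=$nn_id datanode=$dn_id"
fi
rm -f "$nn" "$dn"
```

Deleting the DataNode's data directory, as done below, is the blunt fix for a test cluster; editing the DataNode's VERSION file to copy in the NameNode's clusterID is also reported to work and preserves existing blocks.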
[root@localhost hadoop]# rm -rf /hadoop/dfs/data/current/
[root@localhost hadoop]# sudo -u hdfs hadoop namenode -format
Hive also failed to start:
org.apache.hive.service.ServiceException: Unable to connect to MetaStore!
at org.apache.hive.service.cli.CLIService.start(CLIService.java:87)
at org.apache.hive.service.CompositeService.start(CompositeService.java:70)
at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:61)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: MetaException(message:Could not connect to meta store using any of the URIs provided. Most recent failure: org.apache.thrift.transport.TTransportException: java.net.ConnectException: Connection refused
at org.apache.thrift.transport.TSocket.open(TSocket.java:185)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.open(HiveMetaStoreClient.java:277)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:163)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.<init>(HiveMetaStoreClient.java:103)
at org.apache.hive.service.cli.CLIService.start(CLIService.java:84)
at org.apache.hive.service.CompositeService.start(CompositeService.java:70)
at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:61)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
at java.lang.reflect.Method.invoke(Method.java:597)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: java.net.ConnectException: Connection refused
1. A misconfigured PostgreSQL setup caused the connection error above.
Even logging in directly failed:
postgres=# \q
[root@localhost all-in-one]# /usr/bin/psql -U hive -d metastore
psql: FATAL: Peer authentication failed for user "hive"
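The "Peer authentication failed" message comes from PostgreSQL's pg_hba.conf: with the default peer method, `psql -U hive` only succeeds when the invoking OS user is also named hive. A sketch of the relevant lines, assuming the stock RHEL path /var/lib/pgsql/data/pg_hba.conf; md5 means password authentication, so adjust to your setup:

```
# /var/lib/pgsql/data/pg_hba.conf (assumed path): change "peer"/"ident" to "md5"
# TYPE  DATABASE  USER  ADDRESS       METHOD
local   all       hive                md5
host    all       hive  127.0.0.1/32  md5
```

Reload PostgreSQL after editing (for example with `pg_ctl reload`).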
2. The tables in the metastore database were not owned by hive, which also kept Hive from starting.
As the postgres user, drop the metastore database and recreate it, then enter the metastore database as the hive user and import the SQL schema.
The root cause was probably that the SQL file had originally been imported as the wrong user.
[root@localhost all-in-one]# whereis psql
psql: /usr/bin/psql /usr/share/man/man1/psql.1.gz
[root@localhost all-in-one]# /usr/bin/psql -U hive
psql: FATAL: database "hive" does not exist
[root@localhost all-in-one]# sudo -u postgres /usr/bin/psql
could not change directory to "/home/yuming/0723/all-in-one"
psql (9.2.4)
Type "help" for help.
postgres=# CREATE DATABASE metastore owner=hive
postgres-# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
postgres | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 |
template0 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | en_US.UTF-8 | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
(3 rows)
Recreate the database (note that the first CREATE DATABASE above had no trailing semicolon, so psql was still waiting at the continuation prompt, which is why the retry below reports a syntax error):
postgres-# CREATE DATABASE metastore owner=hive;
ERROR: syntax error at or near "CREATE"
LINE 2: CREATE DATABASE metastore owner=hive;
^
postgres=# create database metastore owner=hive;
CREATE DATABASE
postgres=#
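The full recovery for problem 2 can be sketched as follows. The schema script path is an assumption (CDH ships PostgreSQL schema scripts under /usr/lib/hive/scripts/metastore/upgrade/postgres/); substitute the one matching your Hive version:

```
sudo -u postgres psql -c "DROP DATABASE IF EXISTS metastore;"
sudo -u postgres psql -c "CREATE DATABASE metastore OWNER hive;"
# Import the schema as the hive user so every table ends up owned by hive:
psql -U hive -d metastore -f /usr/lib/hive/scripts/metastore/upgrade/postgres/hive-schema-0.10.0.postgres.sql
```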
HiveServer2 startup error:
2013-08-11 19:27:41,130 FATAL server.HiveServer2 (HiveServer2.java:main(89)) - Error starting HiveServer2
org.apache.hive.service.ServiceException: Failed to Start HiveServer2
at org.apache.hive.service.CompositeService.start(CompositeService.java:80)
at org.apache.hive.service.server.HiveServer2.start(HiveServer2.java:61)
at org.apache.hive.service.server.HiveServer2.main(HiveServer2.java:87)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.main(RunJar.java:208)
Caused by: org.apache.hive.service.ServiceException: Unable to connect to MetaStore!
at org.apache.hive.service.cli.CLIService.start(CLIService.java:87)
at org.apache.hive.service.CompositeService.start(CompositeService.java:70)
... 7 more
Caused by: MetaException(message:Got exception: org.apache.hadoop.hive.metastore.api.MetaException javax.jdo.JDODataStoreException: Error executing JDOQL query "SELECT "THIS"."NAME" AS NUCORDER0 FROM "DBS" "THIS" WHERE (LOWER("THIS"."NAME") LIKE ? ESCAPE '\\' ) ORDER BY NUCORDER0 " : ERROR: invalid escape string
Hint: Escape string must be empty or one character..
NestedThrowables:
org.postgresql.util.PSQLException: ERROR: invalid escape string
Hint: Escape string must be empty or one character.)
at org.apache.hadoop.hive.metastore.MetaStoreUtils.logAndThrowMetaException(MetaStoreUtils.java:850)
at org.apache.hadoop.hive.metastore.HiveMetaStoreClient.getDatabases(HiveMetaS
Fix: edit /var/lib/pgsql/data/postgresql.conf. The relevant defaults are commented out:
#escape_string_warning = on
#lo_compat_privileges = off
#quote_all_identifiers = off
#sql_inheritance = on
#standard_conforming_strings = on
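The flip can be scripted. A sketch that toggles the setting in a scratch copy of the file; the real file here is /var/lib/pgsql/data/postgresql.conf:

```shell
# Sketch: toggle standard_conforming_strings in a scratch copy of postgresql.conf.
conf=$(mktemp)
echo "#standard_conforming_strings = on" > "$conf"
# Uncomment the line (if needed) and force the value to off:
sed -i 's/^#\?standard_conforming_strings = .*/standard_conforming_strings = off/' "$conf"
grep '^standard_conforming_strings' "$conf"
rm -f "$conf"
```

Older Hive metastore versions generate escape sequences that PostgreSQL only accepts with standard_conforming_strings turned off, which is why this setting matters here.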
Turn the setting off:
standard_conforming_strings = off
Restart PostgreSQL:
[root@indigo data]# su postgres
bash-4.2$ /usr/bin/pg_ctl restart -w -m fast -D /var/lib/pgsql/data
waiting for server to shut down.... done
server stopped
waiting for server to start.... done
server started
bash-4.2$ exit
exit
[root@indigo data]# /etc/init.d/hive-server2 start
Starting Hive Server2 (hive-server2): [ OK ]
[root@indigo data]# jps
13011 ResourceManager
13548 JobHistoryServer
962 SecondaryNameNode
13654 RunJar
14132 RunJar
12669 NameNode
12846 DataNode
13364 NodeManager
14216 Jps