SparkSession Unusable After Upgrading Hive

After upgrading Hive from 2.1.0 to 3.1.0, the following error appeared (variables captured in the debugger):

ne = {NucleusException@19314} "org.datanucleus.exceptions.NucleusException: Cannot add `SERDES`.`SERDE_ID` as referenced FK column for `CDS`"
this = {JDOQLQuery@19250} "SELECT FROM org.apache.hadoop.hive.metastore.model.MTableColumnStatistics WHERE dbName == ''"
parameters = {HashMap@19261}  size = 0
candidateCmd = {ClassMetaData@19262} "<class name="MTableColumnStatistics"\n       identity-type="DATASTORE"\n       persistence-modifier="persistence-capable"\n       table="TAB_COL_STATS"\n       detachable="true"\n>\n<datastore-identity strategy="native">\n<column name="CS_ID"/>\n</datastore-identity>\n<inheritance strategy="new-table">\n</inheritance>\n<field name="avgColLen"\n       persistence-modifier="PERSISTENT"\n       null-value="NONE"\n       default-fetch-group="true"\n       embedded="true"\n       unique="false">\n<column name="AVG_COL_LEN" jdbc-type="DOUBLE" allows-null="true"/>\n</field>\n<field name="colName"\n       persistence-modifier="PERSISTENT"\n       null-value="NONE"\n       default-fetch-group="true"\n       embedded="true"\n       unique="false">\n<column name="COLUMN_NAME" jdbc-type="VARCHAR" allows-null="false" length="128"/>\n</field>\n<field name="colType"\n       persistence-modifier="PERSISTENT"\n       null-value="NONE"\n       default-fetch-group="true"\n       embedded="true"\n       unique="false">\n<column name="COL"
startTime = 1562246434198
stmt = null
result = null
candidateClass = {Class@12512} "class org.apache.hadoop.hive.metastore.model.MTableColumnStatistics"
datastoreCompilation = {RDBMSQueryCompilation@19264} 
statementReturnsEmpty = false
ec = {ExecutionContextThreadedImpl@19216} 
subclasses = true

The crux of the error is:

Cannot add `SERDES`.`SERDE_ID` as referenced FK column for `CDS`

This is obviously a Hive metastore error, but I could not work out why a foreign-key problem would appear at all, so the only option was to debug the source code. The first symptom was that SparkSession could not see any tables, yet it produced no error message whatsoever, which was truly nasty: with no stack trace there was nothing to follow. Testing showed that creating a table does raise a stack trace, and following it led to the code below.

org.datanucleus.store.rdbms.table.TableImpl:
	private Map getExistingForeignKeys(Connection conn)
	    throws SQLException
	    {
	        HashMap foreignKeysByName = new HashMap();
	        if (tableExistsInDatastore(conn))
	        {
	            StoreSchemaHandler handler = storeMgr.getSchemaHandler();
	            IdentifierFactory idFactory = storeMgr.getIdentifierFactory();
	            RDBMSTableFKInfo tableFkInfo = (RDBMSTableFKInfo)handler.getSchemaData(conn, "foreign-keys", 
	                new Object[] {this});
	            Iterator fksIter = tableFkInfo.getChildren().iterator();
	            while (fksIter.hasNext())
	            {
	                ForeignKeyInfo fkInfo = (ForeignKeyInfo)fksIter.next();
	                DatastoreIdentifier fkIdentifier;
	                String fkName = (String)fkInfo.getProperty("fk_name");
	                if (fkName == null)
	                {
	                    fkIdentifier = idFactory.newForeignKeyIdentifier(this, foreignKeysByName.size());
	                }
	                else
	                {
	                    fkIdentifier = idFactory.newIdentifier(IdentifierType.FOREIGN_KEY, fkName);
	                }
	    
	                short deferrability = ((Short)fkInfo.getProperty("deferrability")).shortValue();
	                boolean initiallyDeferred = deferrability == DatabaseMetaData.importedKeyInitiallyDeferred;
	                ForeignKey fk = (ForeignKey) foreignKeysByName.get(fkIdentifier);
	                if (fk == null)
	                {
	                    fk = new ForeignKey(initiallyDeferred);
	                    fk.setName(fkIdentifier.getIdentifierName());
	                    foreignKeysByName.put(fkIdentifier, fk);
	                }
	    
	                String pkTableName = (String)fkInfo.getProperty("pk_table_name");
	                DatastoreClass refTable = storeMgr.getDatastoreClass(
	                    idFactory.newTableIdentifier(pkTableName));
	                if (refTable != null)
	                {
	                    String fkColumnName = (String)fkInfo.getProperty("fk_column_name");
	                    String pkColumnName = (String)fkInfo.getProperty("pk_column_name");
	                    DatastoreIdentifier colName = idFactory.newIdentifier(IdentifierType.COLUMN, fkColumnName);
	                    DatastoreIdentifier refColName = idFactory.newIdentifier(IdentifierType.COLUMN, pkColumnName);
	                    Column col = columnsByName.get(colName);
	                    Column refCol = refTable.getColumn(refColName);
	                    if (col != null && refCol != null)
	                    {
	                        fk.addColumn(col, refCol);
	                    }
	                    else
	                    {
	                        //TODO throw exception?
	                    }
	                }
	            }
	        }
	        return foreignKeysByName;
	    }
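Under the hood, the `handler.getSchemaData(conn, "foreign-keys", ...)` call above ultimately boils down to a JDBC `DatabaseMetaData.getImportedKeys()` call. The sketch below is my own illustration (the class and method names are mine, not DataNucleus'); the key point is the `catalog` argument: on MySQL, whether a null catalog means "the current database" or "every database on the server" depends on the Connector/J version and its `nullCatalogMeansCurrent` property, and the server-wide case is exactly how foreign keys from another schema can leak into the scan.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;

// Hedged sketch of the JDBC metadata call behind the "foreign-keys" scan.
// Not DataNucleus code; names here are illustrative.
public final class FkMetadataScan {
    private FkMetadataScan() {}

    // Formats one imported-key row as "FK_NAME: fkTable.fkCol -> pkTable.pkCol".
    static String describeRow(String fkName, String fkTable, String fkCol,
                              String pkTable, String pkCol) {
        return String.format("%s: %s.%s -> %s.%s", fkName, fkTable, fkCol, pkTable, pkCol);
    }

    // Prints one line per foreign key imported by the given table.
    // Passing catalog == null is the dangerous case: depending on the MySQL
    // driver's nullCatalogMeansCurrent setting, the scan may be server-wide.
    public static void printImportedKeys(Connection conn, String catalog, String table)
            throws SQLException {
        DatabaseMetaData md = conn.getMetaData();
        try (ResultSet rs = md.getImportedKeys(catalog, null, table)) {
            while (rs.next()) {
                System.out.println(describeRow(
                        rs.getString("FK_NAME"),
                        rs.getString("FKTABLE_NAME"), rs.getString("FKCOLUMN_NAME"),
                        rs.getString("PKTABLE_NAME"), rs.getString("PKCOLUMN_NAME")));
            }
        }
    }
}
```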

org.datanucleus.store.rdbms.key.ForeignKey:
	public void setColumn(int seq, Column col, Column refCol)
	    {
	        if (table == null)
	        {
	            table = col.getTable();
	            refTable = (DatastoreClass) refCol.getTable();
	            dba = table.getStoreManager().getDatastoreAdapter();
	        }
	        else
	        {
	            if (!table.equals(col.getTable()))
	            {
	                throw new NucleusException("Cannot add " + col + " as FK column for " + table).setFatal();
	            }
	            if (!refTable.equals(refCol.getTable()))
	            {
	                throw new NucleusException("Cannot add " + refCol + " as referenced FK column for " + refTable).setFatal();
	            }
	        }
	        setMinSize(columns, seq + 1);
	        setMinSize(refColumns, seq + 1);
	        columns.set(seq, col);
	        refColumns.set(seq, refCol);
	    }

The code itself is not really at fault. The nasty part is that the foreign-key scan covers every database on the MySQL server, not just the one the connection points at. My server still held the metastore tables from the previous Hive version, so their foreign keys were swept into the result as well, and the consistency check in setColumn failed. Dropping the old metastore database brought everything back to normal. All I can say is: truly nasty.
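The failure mode is easy to reproduce in isolation. The standalone sketch below (all names hypothetical, no DataNucleus involved) groups foreign-key metadata rows by FK name, just as getExistingForeignKeys keys its map by fkIdentifier; when a second schema contributes a same-named constraint that references a different table, the setColumn-style check fails with the same message as in the debugger dump:

```java
import java.util.HashMap;
import java.util.Map;

// Standalone sketch (hypothetical names, not DataNucleus code) of why a
// server-wide foreign-key scan breaks: keys are grouped by name only, so
// same-named constraints from two metastore schemas get merged into one.
public class FkCollisionDemo {

    // Stand-in for DataNucleus' ForeignKey: remembers the referenced table of
    // the first column it sees and, like setColumn(), rejects any other table.
    static class Fk {
        String refTable;
        void addColumn(String pkTable, String pkColumn) {
            if (refTable == null) {
                refTable = pkTable;
            } else if (!refTable.equals(pkTable)) {
                throw new IllegalStateException("Cannot add `" + pkTable + "`.`"
                        + pkColumn + "` as referenced FK column for `" + refTable + "`");
            }
        }
    }

    // Merges FK metadata rows {fkName, pkTable, pkColumn} by name and returns
    // the error message, or null if the rows merge cleanly.
    static String simulate(String[][] rows) {
        Map<String, Fk> byName = new HashMap<>();
        try {
            for (String[] r : rows) {
                byName.computeIfAbsent(r[0], n -> new Fk()).addColumn(r[1], r[2]);
            }
            return null;
        } catch (IllegalStateException e) {
            return e.getMessage();
        }
    }

    public static void main(String[] args) {
        // Two same-named constraints, as a server-wide scan over an old and a
        // new metastore database might return them:
        String[][] rows = {
            {"SDS_FK1", "CDS", "CD_ID"},        // from the leftover old schema
            {"SDS_FK1", "SERDES", "SERDE_ID"},  // from the upgraded schema
        };
        System.out.println(simulate(rows));
        // -> Cannot add `SERDES`.`SERDE_ID` as referenced FK column for `CDS`
    }
}
```

In the real incident the second source of rows was the leftover Hive 2.x metastore database; dropping it removed the duplicate constraint names and the scan succeeded.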
