1. Entry Point
Driver#compile is the entry point of the compilation process:
private void compile(String command, boolean resetTaskIds, boolean deferClose) throws CommandProcessorResponse
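Driver#compile is private; a statement normally reaches it through the public Driver#run, which compiles (parse plus analyze) and then executes. Below is a minimal sketch of driving the full pipeline, assuming a Hive 2.x/3.x classpath and a working metastore; the demo class and query are mine, not from the original article:

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Driver;
import org.apache.hadoop.hive.ql.processors.CommandProcessorResponse;
import org.apache.hadoop.hive.ql.session.SessionState;

public class CompileEntryDemo {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    SessionState.start(conf);          // Driver requires an active session
    Driver driver = new Driver(conf);
    // run() goes through compile (variable substitution, parse, analyze) and then execute
    CommandProcessorResponse resp = driver.run("SELECT 1");
    System.out.println("response code: " + resp.getResponseCode());
  }
}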
2. Steps Performed
- Variable substitution (see the sketch after this list)
- Syntax analysis (parsing)
- Semantic analysis
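The first step, variable substitution, happens at the top of Driver#compile: references such as ${hivevar:...} are expanded before the text ever reaches the parser. A small standalone sketch of the same mechanism using org.apache.hadoop.hive.conf.VariableSubstitution (the demo class and variable map are mine):

import java.util.HashMap;
import java.util.Map;
import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.conf.HiveVariableSource;
import org.apache.hadoop.hive.conf.VariableSubstitution;

public class SubstitutionDemo {
  public static void main(String[] args) {
    HiveConf conf = new HiveConf();
    Map<String, String> hiveVars = new HashMap<>();
    hiveVars.put("dt", "2024-01-01");  // normally set via "set hivevar:dt=..."
    String raw = "SELECT * FROM logs WHERE dt = '${hivevar:dt}'";
    // Driver#compile performs an equivalent substitution before parsing
    String substituted = new VariableSubstitution(new HiveVariableSource() {
      @Override
      public Map<String, String> getHiveVariables() {
        return hiveVars;
      }
    }).substitute(conf, raw);
    System.out.println(substituted);   // ... WHERE dt = '2024-01-01'
  }
}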
2.2 Syntax Analysis -> Building the Abstract Syntax Tree
Driver#compile
boolean parseError = false;
hookRunner.runBeforeParseHook(command);
ASTNode tree;
try {
  tree = ParseUtils.parse(command, ctx);
} catch (ParseException e) {
  parseError = true;
  throw e;
} finally {
  // The after-parse hook always runs and is told whether parsing failed
  hookRunner.runAfterParseHook(command, parseError);
}
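runBeforeParseHook and runAfterParseHook dispatch to QueryLifeTimeHookWithParseHooks implementations registered through hive.query.lifetime.hooks. Here is a hedged sketch of such a hook that times the parse phase (the class name and package are hypothetical):

import org.apache.hadoop.hive.ql.hooks.QueryLifeTimeHookContext;
import org.apache.hadoop.hive.ql.hooks.QueryLifeTimeHookWithParseHooks;

// Register with: set hive.query.lifetime.hooks=com.example.ParseTimingHook;
public class ParseTimingHook implements QueryLifeTimeHookWithParseHooks {
  private long start;

  @Override
  public void beforeParse(QueryLifeTimeHookContext ctx) {
    start = System.nanoTime();
  }

  @Override
  public void afterParse(QueryLifeTimeHookContext ctx, boolean hasError) {
    System.out.printf("parsed [%s] in %d us (error=%b)%n",
        ctx.getCommand(), (System.nanoTime() - start) / 1000, hasError);
  }

  // The remaining QueryLifeTimeHook callbacks are no-ops in this sketch
  @Override public void beforeCompile(QueryLifeTimeHookContext ctx) {}
  @Override public void afterCompile(QueryLifeTimeHookContext ctx, boolean hasError) {}
  @Override public void beforeExecution(QueryLifeTimeHookContext ctx) {}
  @Override public void afterExecution(QueryLifeTimeHookContext ctx, boolean hasError) {}
}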
ParseUtils#parse
public static ASTNode parse(String command, Context ctx) throws ParseException {
  return parse(command, ctx, null);
}
ParseUtils#parse
public static ASTNode parse(
    String command, Context ctx, String viewFullyQualifiedName) throws ParseException {
  ParseDriver pd = new ParseDriver();
  ASTNode tree = pd.parse(command, ctx, viewFullyQualifiedName);
  // Strip the artificial ANTLR root and return the first real token node
  tree = findRootNonNullToken(tree);
  // Resolve TOK_SETCOLREF nodes produced for set operations (e.g. UNION)
  handleSetColRefs(tree);
  return tree;
}
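Since both overloads are public static, the parser can be exercised on its own. A minimal sketch, assuming the Hive jars are on the classpath (the demo class and query are mine):

import org.apache.hadoop.hive.conf.HiveConf;
import org.apache.hadoop.hive.ql.Context;
import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.ParseUtils;
import org.apache.hadoop.hive.ql.session.SessionState;

public class ParseDemo {
  public static void main(String[] args) throws Exception {
    HiveConf conf = new HiveConf();
    SessionState.start(conf);        // gives Context a session to hang scratch dirs on
    Context ctx = new Context(conf); // throws IOException
    ASTNode ast = ParseUtils.parse("SELECT a FROM t WHERE a > 1", ctx);
    System.out.println(ast.dump());  // pretty-prints the AST, one token per line
  }
}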
ParseDriver#parse
public ASTNode parse(String command, Context ctx, String viewFullyQualifiedName)
    throws ParseException {
  // Case-insensitive lexer over the raw query text
  HiveLexerX lexer = new HiveLexerX(new ANTLRNoCaseStringStream(command));
  TokenRewriteStream tokens = new TokenRewriteStream(lexer);
  if (ctx != null) {
    if (viewFullyQualifiedName == null) {
      // Top level query
      ctx.setTokenRewriteStream(tokens);
    } else {
      // It is a view
      ctx.addViewTokenRewriteStream(viewFullyQualifiedName, tokens);
    }
    lexer.setHiveConf(ctx.getConf());
  }

  HiveParser parser = new HiveParser(tokens);
  if (ctx != null) {
    parser.setHiveConf(ctx.getConf());
  }
  // The adaptor turns ANTLR tree nodes into Hive ASTNodes
  parser.setTreeAdaptor(adaptor);
  HiveParser.statement_return r = null;
  try {
    // "statement" is the grammar's top-level rule
    r = parser.statement();
  } catch (RecognitionException e) {
    e.printStackTrace();
    throw new ParseException(parser.errors);
  }

  // Lexer errors take precedence over parser errors
  if (lexer.getErrors().size() == 0 && parser.errors.size() == 0) {
    LOG.debug("Parse Completed");
  } else if (lexer.getErrors().size() != 0) {
    throw new ParseException(lexer.getErrors());
  } else {
    throw new ParseException(parser.errors);
  }

  ASTNode tree = (ASTNode) r.getTree();
  tree.setUnknownTokenBoundaries();
  return tree;
}
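To make the result concrete, here is roughly the tree that comes back for a simple query such as SELECT a FROM t (an abbreviated sketch of ASTNode#dump output from memory; the exact token names are defined in HiveParser.g):

TOK_QUERY
   TOK_FROM
      TOK_TABREF
         TOK_TABNAME
            t
   TOK_INSERT
      TOK_DESTINATION
         TOK_DIR
            TOK_TMP_FILE
      TOK_SELECT
         TOK_SELEXPR
            TOK_TABLE_OR_COL
               a

Note how even a plain SELECT is normalized into an insert into a temporary directory (TOK_TMP_FILE), which lets the analyzer treat query results uniformly.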
2.3 Semantic Analysis
BaseSemanticAnalyzer sem = SemanticAnalyzerFactory.get(queryState, tree);
// HiveSemanticAnalyzerHook#preAnalyze runs here (when hooks are configured)
sem.analyze(tree, ctx);
// HiveSemanticAnalyzerHook#postAnalyze runs here, over the generated root tasks
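The hooks marked above are HiveSemanticAnalyzerHook implementations registered through hive.semantic.analyzer.hook; preAnalyze may even return a rewritten AST. A hedged sketch extending the convenience base class AbstractSemanticAnalyzerHook (the class name and package are hypothetical):

import java.io.Serializable;
import java.util.List;
import org.apache.hadoop.hive.ql.exec.Task;
import org.apache.hadoop.hive.ql.parse.ASTNode;
import org.apache.hadoop.hive.ql.parse.AbstractSemanticAnalyzerHook;
import org.apache.hadoop.hive.ql.parse.HiveSemanticAnalyzerHookContext;
import org.apache.hadoop.hive.ql.parse.SemanticException;

// Register with: set hive.semantic.analyzer.hook=com.example.AuditHook;
public class AuditHook extends AbstractSemanticAnalyzerHook {
  @Override
  public ASTNode preAnalyze(HiveSemanticAnalyzerHookContext context, ASTNode ast)
      throws SemanticException {
    // Inspect (or rewrite) the AST before analysis; returned unchanged here
    System.out.println("analyzing: " + context.getCommand());
    return ast;
  }

  @Override
  public void postAnalyze(HiveSemanticAnalyzerHookContext context,
      List<Task<? extends Serializable>> rootTasks) throws SemanticException {
    System.out.println("generated " + rootTasks.size() + " root task(s)");
  }
}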
SemanticAnalyzerFactory.get
public static BaseSemanticAnalyzer get(QueryState queryState, ASTNode tree) throws SemanticException {
  BaseSemanticAnalyzer sem = getInternal(queryState, tree);
  if (queryState.getHiveOperation() == null) {
    // Truncate the query to 30 characters for the log message
    String query = queryState.getQueryString();
    if (query != null && query.length() > 30) {
      query = query.substring(0, 30);
    }
    String msg = "Unknown HiveOperation for query='" + query + "' queryId=" + queryState.getQueryId();
    //throw new IllegalStateException(msg);
    LOG.debug(msg);
  }
  return sem;
}
SemanticAnalyzerFactory.getInternal
If cost-based optimization (CBO) is enabled, a CalcitePlanner is used; otherwise the plain SemanticAnalyzer is used (see the configuration note after the code).
private static BaseSemanticAnalyzer getInternal(QueryState queryState, ASTNode tree)
    throws SemanticException {
  if (tree.getToken() == null) {
    throw new RuntimeException("Empty Syntax Tree");
  } else {
    HiveOperation opType = commandType.get(tree.getType());
    queryState.setCommandType(opType);
    switch (tree.getType()) {
      // cases for EXPLAIN, DDL, LOAD, etc. omitted here
      default: { // Query
        SemanticAnalyzer semAnalyzer = HiveConf
            .getBoolVar(queryState.getConf(), HiveConf.ConfVars.HIVE_CBO_ENABLED) ?
                new CalcitePlanner(queryState) : new SemanticAnalyzer(queryState);
        return semAnalyzer;
      }
    }
  }
}
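HIVE_CBO_ENABLED is backed by the property hive.cbo.enable (on by default in recent Hive releases), so the planner choice can be flipped per session with set hive.cbo.enable=true|false. The same switch programmatically, as a small sketch:

HiveConf conf = new HiveConf();
// "hive.cbo.enable": true selects CalcitePlanner, false falls back to SemanticAnalyzer
conf.setBoolVar(HiveConf.ConfVars.HIVE_CBO_ENABLED, true);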
This article walked through Hive's compilation flow, starting from Driver#compile as the entry point and moving step by step through variable substitution, syntax analysis, and semantic analysis. In the syntax-analysis phase, ParseUtils.parse builds the abstract syntax tree, with the ANTLR-generated HiveParser applying the grammar rules. In the semantic-analysis phase, SemanticAnalyzerFactory selects a BaseSemanticAnalyzer (CalcitePlanner when CBO is enabled), and the pre/post analyze hooks run around sem.analyze to validate the query. Together these stages make up Hive's internal compilation machinery and its optimization strategy.