Apache Impala Shell Command Parameters: Study Notes

Environment Setup

On the master node hadoop03, start the following three service processes:

service impala-state-store start
service impala-catalog start
service impala-server start

On the worker nodes hadoop01 and hadoop02, start impala-server:

service impala-server start
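
Before moving on, it is worth verifying that the daemons actually started. A minimal check, assuming the same SysV service scripts as above:

service impala-state-store status   # master node only
service impala-catalog status       # master node only
service impala-server status        # every node
ps -ef | grep -i impala | grep -v grep   # confirm the processes directly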

impala-shell External Commands

So-called external commands are the options that can be executed without entering the impala-shell interactive command line.

impala-shell accepts many options on the command line; you set them at startup to modify the command execution environment.

impala-shell -h displays the help manual:

[root@hadoop03 ~]# impala-shell -h
Usage: impala_shell.py [options]

Options:
  -h, --help            show this help message and exit
  -i IMPALAD, --impalad=IMPALAD
                        <host:port> of impalad to connect to
                        [default: hadoop03.Hadoop.com:21000]
  -q QUERY, --query=QUERY
                        Execute a query without the shell [default: none]
  -f QUERY_FILE, --query_file=QUERY_FILE
                        Execute the queries in the query file, delimited by ;.
                        If the argument to -f is "-", then queries are read
                        from stdin and terminated with ctrl-d. [default: none]
  -k, --kerberos        Connect to a kerberized impalad [default: False]
  -o OUTPUT_FILE, --output_file=OUTPUT_FILE
                        If set, query results are written to the given file.
                        Results from multiple semicolon-terminated queries
                        will be appended to the same file [default: none]
  -B, --delimited       Output rows in delimited mode [default: False]
  --print_header        Print column names in delimited mode when pretty-
                        printed. [default: False]
  --output_delimiter=OUTPUT_DELIMITER
                        Field delimiter to use for output in delimited mode
                        [default: \t]
  -s KERBEROS_SERVICE_NAME, --kerberos_service_name=KERBEROS_SERVICE_NAME
                        Service name of a kerberized impalad [default: impala]
  -V, --verbose         Verbose output [default: True]
  -p, --show_profiles   Always display query profiles after execution
                        [default: False]
  --quiet               Disable verbose output [default: False]
  -v, --version         Print version information [default: False]
  -c, --ignore_query_failure
                        Continue on query failure [default: False]
  -r, --refresh_after_connect
                        Refresh Impala catalog after connecting
                        [default: False]
  -d DEFAULT_DB, --database=DEFAULT_DB
                        Issues a use database command on startup
                        [default: none]
  -l, --ldap            Use LDAP to authenticate with Impala. Impala must be
                        configured to allow LDAP authentication.
                        [default: False]
  -u USER, --user=USER  User to authenticate with. [default: root]
  --ssl                 Connect to Impala via SSL-secured connection
                        [default: False]
  --ca_cert=CA_CERT     Full path to certificate file used to authenticate
                        Impala's SSL certificate. May either be a copy of
                        Impala's certificate (for self-signed certs) or the
                        certificate of a trusted third-party CA. If not set,
                        but SSL is enabled, the shell will NOT verify Impala's
                        server certificate [default: none]
  --config_file=CONFIG_FILE
                        Specify the configuration file to load options. The
                        following sections are used: [impala],
                        [impala.query_options]. Section names are case
                        sensitive. Specifying this option within a config file
                        will have no effect. Only specify this as an option in
                        the commandline. [default: /root/.impalarc]
  --live_summary        Print a query summary every 1s while the query is
                        running. [default: False]
  --live_progress       Print a query progress every 1s while the query is
                        running. [default: False]
  --auth_creds_ok_in_clear
                        If set, LDAP authentication may be used with an
                        insecure connection to Impala. WARNING: Authentication
                        credentials will therefore be sent unencrypted, and
                        may be vulnerable to attack. [default: none]
  --ldap_password_cmd=LDAP_PASSWORD_CMD
                        Shell command to run to retrieve the LDAP password
                        [default: none]
  --var=KEYVAL          Defines a variable to be used within the Impala
                        session. Can be used multiple times to set different
                        variables. It must follow the pattern "KEY=VALUE", KEY
                        starts with an alphabetic character and contains
                        alphanumeric characters or underscores. [default:
                        none]
  -Q QUERY_OPTIONS, --query_option=QUERY_OPTIONS
                        Sets the default for a query option. Can be used
                        multiple times to set different query options. It must
                        follow the pattern "KEY=VALUE", KEY must be a valid
                        query option. Valid query options  can be listed by
                        command 'set'. [default: none]
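
Several of these options are often combined. For example, the following sketch runs a single query non-interactively and saves the result as comma-delimited text with a header row (the table and output path are illustrative):

impala-shell -i hadoop03 -q "select * from testdb.score" \
    -B --output_delimiter=',' --print_header -o /home/score.csv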

impala-shell -r

impala-shell -r refreshes the Impala metadata after connecting. As the output below shows, it actually issues INVALIDATE METADATA, so it has the same effect as running that statement yourself once connected (note the deprecation warning in the output).

[root@hadoop03 ~]# impala-shell -r                 
Starting Impala Shell without Kerberos authentication
Connected to hadoop03.Hadoop.com:21000
Server version: impalad version 2.11.0-cdh5.14.0 RELEASE (build d68206561bce6b26762d62c01a78e6cd27aa7690)
Invalidating Metadata
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.11.0-cdh5.14.0 (d682065) built on Sat Jan  6 13:27:16 PST 2018)

The '-B' command line flag turns off pretty-printing for query results. Use this
flag to remove formatting from results you want to save for later, or to benchmark
Impala.
***********************************************************************************
+==========================================================================+
| DEPRECATION WARNING:                                                     |
| -r/--refresh_after_connect is deprecated and will be removed in a future |
| version of Impala shell.                                                 |
+==========================================================================+
Query: invalidate metadata
Query submitted at: 2019-12-10 17:08:38 (Coordinator: http://hadoop03:25000)
Query progress can be monitored at: http://hadoop03:25000/query_plan?query_id=dc439750e99f7571:4b61d47200000000
Fetched 0 row(s) in 3.98s
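
Because -r is deprecated (see the warning above), an equivalent is to issue the statement explicitly, for example as a one-off query with -q:

impala-shell -q "invalidate metadata"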

impala-shell -f

impala-shell -f <file path> executes the SQL queries in the specified file.

[root@hadoop03 home]# vi test.sql
use testdb;
select * from score;
~                                                                                                              
"test.sql" [New] 2L, 33C written
[root@hadoop03 home]# impala-shell -f test.sql
Starting Impala Shell without Kerberos authentication
Connected to hadoop03.Hadoop.com:21000
Server version: impalad version 2.11.0-cdh5.14.0 RELEASE (build d68206561bce6b26762d62c01a78e6cd27aa7690)
Query: use testdb
Query: select * from score
Query submitted at: 2019-12-10 17:14:14 (Coordinator: http://hadoop03:25000)
Query progress can be monitored at: http://hadoop03:25000/query_plan?query_id=ea4fab11c1f44549:51d84e4300000000
+------+------+---------+
| s_id | c_id | s_score |
+------+------+---------+
| 01   | 01   | 70      |
| 01   | 02   | 90      |
| 01   | 03   | 97      |
| 02   | 01   | 68      |
| 02   | 02   | 60      |
| 02   | 03   | 85      |
| 03   | 01   | 80      |
| 03   | 02   | 80      |
| 03   | 03   | 80      |
| 04   | 01   | 50      |
| 04   | 02   | 30      |
| 04   | 03   | 20      |
| 05   | 01   | 76      |
| 05   | 02   | 87      |
| 06   | 01   | 31      |
| 06   | 03   | 34      |
| 07   | 02   | 89      |
| 07   | 03   | 98      |
+------+------+---------+
Fetched 18 row(s) in 0.12s
[root@hadoop03 home]# 
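
As the help text notes, passing "-" as the argument to -f reads the queries from stdin instead, which makes it easy to pipe SQL in without creating a file first:

echo "use testdb; select count(*) from score;" | impala-shell -f -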

impala-shell -i

impala-shell -i specifies the host running the impalad daemon to connect to. The default port is 21000.
In principle, you can connect to any host in the cluster that is running impalad.

In this environment, however, only the master node hadoop03 accepts connections; connecting to the other nodes fails, as the session below shows (most likely because their impala-server processes are not running or not reachable).

[root@hadoop03 impala]# impala-shell -i hadoop01    
Starting Impala Shell without Kerberos authentication
Error connecting: TTransportException, Could not connect to hadoop01:21000
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.11.0-cdh5.14.0 (d682065) built on Sat Jan  6 13:27:16 PST 2018)

The HISTORY command lists all shell commands in chronological order.
***********************************************************************************
[Not connected] > 
[18]+  Stopped                 impala-shell -i hadoop01
[root@hadoop03 impala]# clear
[root@hadoop03 impala]# impala-shell -i hadoop01
Starting Impala Shell without Kerberos authentication
Error connecting: TTransportException, Could not connect to hadoop01:21000
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.11.0-cdh5.14.0 (d682065) built on Sat Jan  6 13:27:16 PST 2018)

To see more tips, run the TIP command.
***********************************************************************************
[Not connected] > 
[19]+  Stopped                 impala-shell -i hadoop01
[root@hadoop03 impala]# impala-shell -i hadoop02
Starting Impala Shell without Kerberos authentication
Error connecting: TTransportException, Could not connect to hadoop02:21000
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.11.0-cdh5.14.0 (d682065) built on Sat Jan  6 13:27:16 PST 2018)

The HISTORY command lists all shell commands in chronological order.
***********************************************************************************
[Not connected] > 
[20]+  Stopped                 impala-shell -i hadoop02
[root@hadoop03 impala]# impala-shell -i hadoop03
Starting Impala Shell without Kerberos authentication
Connected to hadoop03:21000
Server version: impalad version 2.11.0-cdh5.14.0 RELEASE (build d68206561bce6b26762d62c01a78e6cd27aa7690)
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.11.0-cdh5.14.0 (d682065) built on Sat Jan  6 13:27:16 PST 2018)

To see more tips, run the TIP command.
***********************************************************************************
[hadoop03:21000] > 
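
When a connection attempt fails like the ones above, a quick diagnostic is to check on the target node whether impalad is actually running and listening on port 21000, for example:

service impala-server status
netstat -ntlp | grep 21000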

impala-shell -o

impala-shell -o saves the execution results to the given file.

[root@hadoop03 home]# impala-shell -o a.txt
Starting Impala Shell without Kerberos authentication
Connected to hadoop03.Hadoop.com:21000
Server version: impalad version 2.11.0-cdh5.14.0 RELEASE (build d68206561bce6b26762d62c01a78e6cd27aa7690)
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.11.0-cdh5.14.0 (d682065) built on Sat Jan  6 13:27:16 PST 2018)

You can run a single query from the command line using the '-q' option.
***********************************************************************************
[hadoop03.Hadoop.com:21000] > use testdb;
Query: use testdb
[hadoop03.Hadoop.com:21000] > select * from score;
Query: select * from score
Query submitted at: 2019-12-10 17:45:52 (Coordinator: http://hadoop03:25000)
Query progress can be monitored at: http://hadoop03:25000/query_plan?query_id=6b49073c22ee22be:50fb720a00000000
Fetched 18 row(s) in 7.33s
[hadoop03.Hadoop.com:21000] > 
[22]+  Stopped                 impala-shell -o a.txt
[root@hadoop03 home]# ll
total 8
-rw-r--r-- 1 root root 572 Dec 10 17:45 a.txt
-rw-r--r-- 1 root root  33 Dec 10 17:13 test.sql
[root@hadoop03 home]# cat a.txt
+------+------+---------+
| s_id | c_id | s_score |
+------+------+---------+
| 01   | 01   | 70      |
| 01   | 02   | 90      |
| 01   | 03   | 97      |
| 02   | 01   | 68      |
| 02   | 02   | 60      |
| 02   | 03   | 85      |
| 03   | 01   | 80      |
| 03   | 02   | 80      |
| 03   | 03   | 80      |
| 04   | 01   | 50      |
| 04   | 02   | 30      |
| 04   | 03   | 20      |
| 05   | 01   | 76      |
| 05   | 02   | 87      |
| 06   | 01   | 31      |
| 06   | 03   | 34      |
| 07   | 02   | 89      |
| 07   | 03   | 98      |
+------+------+---------+
[root@hadoop03 home]# 
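
-o also combines naturally with -f for fully non-interactive runs, and adding -B strips the pretty-printing so the output file is easier to post-process (reusing the test.sql from earlier):

impala-shell -f test.sql -B -o a.txt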

impala-shell Internal Commands

Internal commands are the statements that can be executed after entering the impala-shell command line.

help

[hadoop03.Hadoop.com:21000] > help;

Documented commands (type help <topic>):
========================================
compute  describe  explain  profile  rerun   set    show  unset  values   with
connect  exit      history  quit     select  shell  tip   use    version

Undocumented commands:
======================
alter   delete  drop  insert  source  summary  upsert
create  desc    help  load    src     update 

connect hostname

connect hostname connects to the impalad on the specified machine, where subsequent statements will execute.

[Not connected] > connect hadoop03;
Connected to hadoop03:21000
Server version: impalad version 2.11.0-cdh5.14.0 RELEASE (build d68206561bce6b26762d62c01a78e6cd27aa7690)
[hadoop03:21000] > connect hadoop01;
Error connecting: TTransportException, Could not connect to hadoop01:21000
[Not connected] > connect hadoop02;
Error connecting: TTransportException, Could not connect to hadoop02:21000
[Not connected] > 

refresh dbname.tablename

refresh dbname.tablename performs an incremental refresh of one table's metadata. It is mainly used when the data inside an existing Hive table has changed.
[hadoop03:21000] > refresh testdb.score;
Query: refresh testdb.score
Query submitted at: 2019-12-10 17:54:05 (Coordinator: http://hadoop03:25000)
Query progress can be monitored at: http://hadoop03:25000/query_plan?query_id=694f92480dbbdae1:cf8453c300000000
Fetched 0 row(s) in 2.21s
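
A typical workflow: after adding a data file to the table's storage directory from outside Impala (for example with hdfs dfs -put), run REFRESH so Impala notices the new file. The warehouse path below is illustrative:

hdfs dfs -put /home/new_scores.txt /user/hive/warehouse/testdb.db/score/
impala-shell -q "refresh testdb.score"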

invalidate metadata

invalidate metadata performs a full refresh and is relatively expensive. It is mainly used after creating a new database or table in Hive.
[hadoop03:21000] > invalidate metadata;
Query: invalidate metadata
Query submitted at: 2019-12-10 17:54:53 (Coordinator: http://hadoop03:25000)
Query progress can be monitored at: http://hadoop03:25000/query_plan?query_id=9649dc661712d739:e61cc1ac00000000
Fetched 0 row(s) in 5.96s
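
INVALIDATE METADATA can also be scoped to a single table, which is much cheaper than invalidating the entire catalog when only one table is new:

invalidate metadata testdb.score;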

quit/exit

The quit and exit commands exit the Impala shell.
[hadoop03:21000] > quit;
Goodbye root
[root@hadoop03 home]# impala-shell 
Starting Impala Shell without Kerberos authentication
Connected to hadoop03.Hadoop.com:21000
Server version: impalad version 2.11.0-cdh5.14.0 RELEASE (build d68206561bce6b26762d62c01a78e6cd27aa7690)
***********************************************************************************
Welcome to the Impala shell.
(Impala Shell v2.11.0-cdh5.14.0 (d682065) built on Sat Jan  6 13:27:16 PST 2018)

The SET command shows the current value of all shell and query options.
***********************************************************************************
[hadoop03.Hadoop.com:21000] > exit;
Goodbye root
[root@hadoop03 home]# 

explain

The explain command shows the execution plan of a SQL statement.
[hadoop03.Hadoop.com:21000] > use testdb;
Query: use testdb
[hadoop03.Hadoop.com:21000] > explain select * from score;
Query: explain select * from score
+------------------------------------------------------------------------------------+
| Explain String                                                                     |
+------------------------------------------------------------------------------------+
| Max Per-Host Resource Reservation: Memory=0B                                       |
| Per-Host Resource Estimates: Memory=32.00MB                                        |
| WARNING: The following tables are missing relevant table and/or column statistics. |
| testdb.score                                                                       |
|                                                                                    |
| PLAN-ROOT SINK                                                                     |
| |                                                                                  |
| 01:EXCHANGE [UNPARTITIONED]                                                        |
| |                                                                                  |
| 00:SCAN HDFS [testdb.score]                                                        |
|    partitions=1/1 files=1 size=178B                                                |
+------------------------------------------------------------------------------------+
Fetched 11 row(s) in 4.59s
[hadoop03.Hadoop.com:21000] > 

The EXPLAIN_LEVEL query option can be set to 0, 1, 2, or 3, where 3 is the highest level and prints the most complete information:
set explain_level=3;

[hadoop03.Hadoop.com:21000] > set explain_level=3;
EXPLAIN_LEVEL set to 3
[hadoop03.Hadoop.com:21000] > explain select * from score;
Query: explain select * from score
+------------------------------------------------------------------------------------+
| Explain String                                                                     |
+------------------------------------------------------------------------------------+
| Max Per-Host Resource Reservation: Memory=0B                                       |
| Per-Host Resource Estimates: Memory=32.00MB                                        |
| WARNING: The following tables are missing relevant table and/or column statistics. |
| testdb.score                                                                       |
|                                                                                    |
| F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1                              |
| Per-Host Resources: mem-estimate=0B mem-reservation=0B                             |
|   PLAN-ROOT SINK                                                                   |
|   |  mem-estimate=0B mem-reservation=0B                                            |
|   |                                                                                |
|   01:EXCHANGE [UNPARTITIONED]                                                      |
|      mem-estimate=0B mem-reservation=0B                                            |
|      tuple-ids=0 row-size=34B cardinality=unavailable                              |
|                                                                                    |
| F00:PLAN FRAGMENT [RANDOM] hosts=1 instances=1                                     |
| Per-Host Resources: mem-estimate=32.00MB mem-reservation=0B                        |
|   DATASTREAM SINK [FRAGMENT=F01, EXCHANGE=01, UNPARTITIONED]                       |
|   |  mem-estimate=0B mem-reservation=0B                                            |
|   00:SCAN HDFS [testdb.score, RANDOM]                                              |
|      partitions=1/1 files=1 size=178B                                              |
|      stats-rows=unavailable extrapolated-rows=disabled                             |
|      table stats: rows=unavailable size=178B                                       |
|      column stats: unavailable                                                     |
|      mem-estimate=32.00MB mem-reservation=0B                                       |
|      tuple-ids=0 row-size=34B cardinality=unavailable                              |
+------------------------------------------------------------------------------------+
Fetched 25 row(s) in 0.82s
[hadoop03.Hadoop.com:21000] > 
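
To go back to the default level afterwards, the option can be unset (unset is one of the shell commands listed under help):

unset explain_level;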

profile

The profile command is run after a SQL statement has finished executing. It prints the detailed execution steps of that query and is mainly used for inspecting how the query ran and for cluster tuning.

[hadoop03.Hadoop.com:21000] > select * from score;
Query: select * from score
Query submitted at: 2019-12-10 18:01:30 (Coordinator: http://hadoop03:25000)
Query progress can be monitored at: http://hadoop03:25000/query_plan?query_id=aa4dfb0b5a4a165d:76b84d5700000000
+------+------+---------+
| s_id | c_id | s_score |
+------+------+---------+
| 01   | 01   | 70      |
| 01   | 02   | 90      |
| 01   | 03   | 97      |
| 02   | 01   | 68      |
| 02   | 02   | 60      |
| 02   | 03   | 85      |
| 03   | 01   | 80      |
| 03   | 02   | 80      |
| 03   | 03   | 80      |
| 04   | 01   | 50      |
| 04   | 02   | 30      |
| 04   | 03   | 20      |
| 05   | 01   | 76      |
| 05   | 02   | 87      |
| 06   | 01   | 31      |
| 06   | 03   | 34      |
| 07   | 02   | 89      |
| 07   | 03   | 98      |
+------+------+---------+
Fetched 18 row(s) in 0.42s
[hadoop03.Hadoop.com:21000] > profile;
Query Runtime Profile:
Query (id=aa4dfb0b5a4a165d:76b84d5700000000):
  Summary:
    Session ID: e14bcb133d8b3ba7:7faa18ffebbda0be
    Session Type: BEESWAX
    Start Time: 2019-12-10 18:01:30.245469000
    End Time: 2019-12-10 18:01:30.669098000
    Query Type: QUERY
    Query State: FINISHED
    Query Status: OK
    Impala Version: impalad version 2.11.0-cdh5.14.0 RELEASE (build d68206561bce6b26762d62c01a78e6cd27aa7690)
    User: root
    Connected User: root
    Delegated User: 
    Network Address: ::ffff:192.168.100.203:60580
    Default Db: testdb
    Sql Statement: select * from score
    Coordinator: hadoop03:22000
    Query Options (set by configuration): EXPLAIN_LEVEL=3
    Query Options (set by configuration and planner): EXPLAIN_LEVEL=3,MT_DOP=0
    Plan: 
----------------
Max Per-Host Resource Reservation: Memory=0B
Per-Host Resource Estimates: Memory=32.00MB
WARNING: The following tables are missing relevant table and/or column statistics.
testdb.score

F01:PLAN FRAGMENT [UNPARTITIONED] hosts=1 instances=1
|  Per-Host Resources: mem-estimate=0B mem-reservation=0B
PLAN-ROOT SINK
|  mem-estimate=0B mem-reservation=0B
|
01:EXCHANGE [UNPARTITIONED]
|  mem-estimate=0B mem-reservation=0B
|  tuple-ids=0 row-size=34B cardinality=unavailable
|
F00:PLAN FRAGMENT [RANDOM] hosts=1 instances=1
Per-Host Resources: mem-estimate=32.00MB mem-reservation=0B
00:SCAN HDFS [testdb.score, RANDOM]
   partitions=1/1 files=1 size=178B
   stats-rows=unavailable extrapolated-rows=disabled
   table stats: rows=unavailable size=178B
   column stats: unavailable
   mem-estimate=32.00MB mem-reservation=0B
   tuple-ids=0 row-size=34B cardinality=unavailable
----------------
    Estimated Per-Host Mem: 33554432
    Tables Missing Stats: testdb.score
    Per Host Min Reservation: hadoop03:22000(0) 
    Request Pool: default-pool
    Admission result: Admitted immediately
    ExecSummary: 
Operator       #Hosts   Avg Time   Max Time  #Rows  Est. #Rows   Peak Mem  Est. Peak Mem  Detail        
--------------------------------------------------------------------------------------------------------
01:EXCHANGE         1  381.453ms  381.453ms     18          -1          0              0  UNPARTITIONED 
00:SCAN HDFS        1  340.137ms  340.137ms     18          -1  187.00 KB       32.00 MB  testdb.score  
    Errors: 
    Planner Timeline: 2.047ms
       - Analysis finished: 711.072us (711.072us)
       - Value transfer graph computed: 768.276us (57.204us)
       - Single node plan created: 1.112ms (344.471us)
       - Runtime filters computed: 1.158ms (45.654us)
       - Distributed plan created: 1.180ms (22.241us)
       - Planning finished: 2.047ms (867.314us)
    Query Timeline: 424.235ms
       - Query submitted: 21.068us (21.068us)
       - Planning finished: 2.885ms (2.864ms)
       - Submit for admission: 2.973ms (88.727us)
       - Completed admission: 2.977ms (3.982us)
       - Ready to start on 1 backends: 3.060ms (82.516us)
       - All 1 execution backends (2 fragment instances) started: 3.326ms (266.454us)
       - Rows available: 394.491ms (391.164ms)
       - First row fetched: 421.274ms (26.782ms)
       - Unregister query: 423.631ms (2.357ms)
     - ComputeScanRangeAssignmentTimer: 8.474us
  ImpalaServer:
     - ClientFetchWaitTimer: 28.251ms
     - RowMaterializationTimer: 873.420us
  Execution Profile aa4dfb0b5a4a165d:76b84d5700000000:(Total: 392.250ms, non-child: 0.000ns, % non-child: 0.00%)
    Number of filters: 0
    Filter routing table: 
 ID  Src. Node  Tgt. Node(s)  Target type  Partition filter  Pending (Expected)  First arrived  Completed   Enabled
-------------------------------------------------------------------------------------------------------------------

    Backend startup latencies: Count: 1, min / max: 0 / 0, 25th %-ile: 0, 50th %-ile: 0, 75th %-ile: 0, 90th %-ile: 0, 95th %-ile: 0, 99.9th %-ile: 0
    Per Node Peak Memory Usage: hadoop03:22000(254.45 KB) 
     - FiltersReceived: 0 (0)
     - FinalizationTimer: 0.000ns
     - NumBackends: 1 (1)
     - NumFragmentInstances: 2 (2)
     - NumFragments: 2 (2)
    Averaged Fragment F01:(Total: 417.579ms, non-child: 27.190ms, % non-child: 6.51%)
      split sizes:  min: 0, max: 0, avg: 0, stddev: 0
      completion times: min:418.400ms  max:418.400ms  mean: 418.400ms  stddev:0.000ns
      execution rates: min:0.00 /sec  max:0.00 /sec  mean:0.00 /sec  stddev:0.00 /sec
      num instances: 1
       - AverageThreadTokens: 0.00 
       - BloomFilterBytes: 0
       - PeakMemoryUsage: 12.14 KB (12432)
       - PeakReservation: 0
       - PeakUsedReservation: 0
       - PerHostPeakMemUsage: 254.45 KB (260560)
       - RowsProduced: 18 (18)
       - TotalNetworkReceiveTime: 381.451ms
       - TotalNetworkSendTime: 0.000ns
       - TotalStorageWaitTime: 0.000ns
       - TotalThreadsInvoluntaryContextSwitches: 0 (0)
       - TotalThreadsTotalWallClockTime: 408.401ms
         - TotalThreadsSysTime: 0.000ns
         - TotalThreadsUserTime: 0.000ns
       - TotalThreadsVoluntaryContextSwitches: 3 (3)
      Fragment Instance Lifecycle Timings:
         - ExecTime: 27.034ms
           - ExecTreeExecTime: 89.602us
         - OpenTime: 381.371ms
           - ExecTreeOpenTime: 381.365ms
         - PrepareTime: 9.151ms
           - ExecTreePrepareTime: 12.944us
      PLAN_ROOT_SINK:
         - PeakMemoryUsage: 0
      CodeGen:(Total: 8.935ms, non-child: 8.935ms, % non-child: 100.00%)
         - CodegenTime: 0.000ns
         - CompileTime: 0.000ns
         - LoadTime: 0.000ns
         - ModuleBitcodeSize: 1.86 MB (1950880)
         - NumFunctions: 0 (0)
         - NumInstructions: 0 (0)
         - OptimizationTime: 0.000ns
         - PeakMemoryUsage: 0
         - PrepareTime: 8.530ms
      EXCHANGE_NODE (id=1):(Total: 381.453ms, non-child: 381.453ms, % non-child: 100.00%)
         - ConvertRowBatchTime: 790.000ns
         - PeakMemoryUsage: 0
         - RowsReturned: 18 (18)
         - RowsReturnedRate: 47.00 /sec
        DataStreamReceiver:
           - BytesReceived: 433.00 B (433)
           - DeserializeRowBatchTimer: 2.780us
           - FirstBatchArrivalWaitTime: 381.361ms
           - PeakMemoryUsage: 4.14 KB (4240)
           - SendersBlockedTimer: 0.000ns
           - SendersBlockedTotalTimer(*): 0.000ns
    Coordinator Fragment F01:
      Instance aa4dfb0b5a4a165d:76b84d5700000000 (host=hadoop03:22000):(Total: 417.579ms, non-child: 27.190ms, % non-child: 6.51%)
        MemoryUsage(500.000ms): 8.00 KB
         - AverageThreadTokens: 0.00 
         - BloomFilterBytes: 0
         - PeakMemoryUsage: 12.14 KB (12432)
         - PeakReservation: 0
         - PeakUsedReservation: 0
         - PerHostPeakMemUsage: 254.45 KB (260560)
         - RowsProduced: 18 (18)
         - TotalNetworkReceiveTime: 381.451ms
         - TotalNetworkSendTime: 0.000ns
         - TotalStorageWaitTime: 0.000ns
         - TotalThreadsInvoluntaryContextSwitches: 0 (0)
         - TotalThreadsTotalWallClockTime: 408.401ms
           - TotalThreadsSysTime: 0.000ns
           - TotalThreadsUserTime: 0.000ns
         - TotalThreadsVoluntaryContextSwitches: 3 (3)
        Fragment Instance Lifecycle Timings:
           - ExecTime: 27.034ms
             - ExecTreeExecTime: 89.602us
           - OpenTime: 381.371ms
             - ExecTreeOpenTime: 381.365ms
           - PrepareTime: 9.151ms
             - ExecTreePrepareTime: 12.944us
        PLAN_ROOT_SINK:
           - PeakMemoryUsage: 0
        CodeGen:(Total: 8.935ms, non-child: 8.935ms, % non-child: 100.00%)
           - CodegenTime: 0.000ns
           - CompileTime: 0.000ns
           - LoadTime: 0.000ns
           - ModuleBitcodeSize: 1.86 MB (1950880)
           - NumFunctions: 0 (0)
           - NumInstructions: 0 (0)
           - OptimizationTime: 0.000ns
           - PeakMemoryUsage: 0
           - PrepareTime: 8.530ms
        EXCHANGE_NODE (id=1):(Total: 381.453ms, non-child: 381.453ms, % non-child: 100.00%)
           - ConvertRowBatchTime: 790.000ns
           - PeakMemoryUsage: 0
           - RowsReturned: 18 (18)
           - RowsReturnedRate: 47.00 /sec
          DataStreamReceiver:
            BytesReceived(500.000ms): 0
             - BytesReceived: 433.00 B (433)
             - DeserializeRowBatchTimer: 2.780us
             - FirstBatchArrivalWaitTime: 381.361ms
             - PeakMemoryUsage: 4.14 KB (4240)
             - SendersBlockedTimer: 0.000ns
             - SendersBlockedTotalTimer(*): 0.000ns
    Averaged Fragment F00:(Total: 390.413ms, non-child: 1.114ms, % non-child: 0.29%)
      split sizes:  min: 178.00 B, max: 178.00 B, avg: 178.00 B, stddev: 0
      completion times: min:391.538ms  max:391.538ms  mean: 391.538ms  stddev:0.000ns
      execution rates: min:454.00 B/sec  max:454.00 B/sec  mean:454.00 B/sec  stddev:0.62 B/sec
      num instances: 1
       - AverageThreadTokens: 2.00 
       - BloomFilterBytes: 0
       - PeakMemoryUsage: 246.45 KB (252368)
       - PeakReservation: 0
       - PeakUsedReservation: 0
       - PerHostPeakMemUsage: 254.45 KB (260560)
       - RowsProduced: 18 (18)
       - TotalNetworkReceiveTime: 0.000ns
       - TotalNetworkSendTime: 417.137us
       - TotalStorageWaitTime: 339.705ms
       - TotalThreadsInvoluntaryContextSwitches: 11 (11)
       - TotalThreadsTotalWallClockTime: 720.434ms
         - TotalThreadsSysTime: 0.000ns
         - TotalThreadsUserTime: 39.994ms
       - TotalThreadsVoluntaryContextSwitches: 10 (10)
      Fragment Instance Lifecycle Timings:
         - ExecTime: 340.529ms
           - ExecTreeExecTime: 340.033ms
         - OpenTime: 39.964ms
           - ExecTreeOpenTime: 21.347us
         - PrepareTime: 9.902ms
           - ExecTreePrepareTime: 36.079us
      DataStreamSender (dst_id=1):(Total: 105.872us, non-child: 105.872us, % non-child: 100.00%)
         - BytesSent: 433.00 B (433)
         - NetworkThroughput(*): 13.39 MB/sec
         - OverallThroughput: 3.90 MB/sec
         - PeakMemoryUsage: 3.45 KB (3536)
         - RowsReturned: 18 (18)
         - SerializeBatchTime: 8.186us
         - TransmitDataRPCTime: 30.834us
         - UncompressedRowBatchSize: 882.00 B (882)
      CodeGen:(Total: 49.055ms, non-child: 49.055ms, % non-child: 100.00%)
         - CodegenTime: 1.070ms
         - CompileTime: 13.675ms
         - LoadTime: 0.000ns
         - ModuleBitcodeSize: 1.86 MB (1950880)
         - NumFunctions: 29 (29)
         - NumInstructions: 470 (470)
         - OptimizationTime: 25.412ms
         - PeakMemoryUsage: 235.00 KB (240640)
         - PrepareTime: 9.617ms
      HDFS_SCAN_NODE (id=0):(Total: 340.137ms, non-child: 340.137ms, % non-child: 100.00%)
         - AverageHdfsReadThreadConcurrency: 0.00 
         - AverageScannerThreadConcurrency: 1.00 
         - BytesRead: 178.00 B (178)
         - BytesReadDataNodeCache: 0
         - BytesReadLocal: 178.00 B (178)
         - BytesReadRemoteUnexpected: 0
         - BytesReadShortCircuit: 178.00 B (178)
         - CachedFileHandlesHitCount: 0 (0)
         - CachedFileHandlesMissCount: 1 (1)
         - CollectionItemsRead: 0 (0)
         - DecompressionTime: 0.000ns
         - MaxCompressedTextFileLength: 0
         - NumDisksAccessed: 1 (1)
         - NumScannerThreadsStarted: 1 (1)
         - PeakMemoryUsage: 187.00 KB (191488)
         - PerReadThreadRawHdfsThroughput: 4.49 MB/sec
         - RemoteScanRanges: 0 (0)
         - RowBatchQueueGetWaitTime: 339.801ms
         - RowBatchQueuePutWaitTime: 0.000ns
         - RowsRead: 18 (18)
         - RowsReturned: 18 (18)
         - RowsReturnedRate: 52.00 /sec
         - ScanRangesComplete: 1 (1)
         - ScannerThreadsInvoluntaryContextSwitches: 6 (6)
         - ScannerThreadsTotalWallClockTime: 339.945ms
           - DelimiterParseTime: 2.267us
           - MaterializeTupleTime(*): 11.473us
           - ScannerThreadsSysTime: 0.000ns
           - ScannerThreadsUserTime: 0.000ns
         - ScannerThreadsVoluntaryContextSwitches: 3 (3)
         - TotalRawHdfsReadTime(*): 37.801us
         - TotalReadThroughput: 0.00 /sec
    Fragment F00:
      Instance aa4dfb0b5a4a165d:76b84d5700000001 (host=hadoop03:22000):(Total: 390.413ms, non-child: 1.114ms, % non-child: 0.29%)
        Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:1/178.00 B 
        MemoryUsage(500.000ms): 122.02 KB
        ThreadUsage(500.000ms): 2
         - AverageThreadTokens: 2.00 
         - BloomFilterBytes: 0
         - PeakMemoryUsage: 246.45 KB (252368)
         - PeakReservation: 0
         - PeakUsedReservation: 0
         - PerHostPeakMemUsage: 254.45 KB (260560)
         - RowsProduced: 18 (18)
         - TotalNetworkReceiveTime: 0.000ns
         - TotalNetworkSendTime: 417.137us
         - TotalStorageWaitTime: 339.705ms
         - TotalThreadsInvoluntaryContextSwitches: 11 (11)
         - TotalThreadsTotalWallClockTime: 720.434ms
           - TotalThreadsSysTime: 0.000ns
           - TotalThreadsUserTime: 39.994ms
         - TotalThreadsVoluntaryContextSwitches: 10 (10)
        Fragment Instance Lifecycle Timings:
           - ExecTime: 340.529ms
             - ExecTreeExecTime: 340.033ms
           - OpenTime: 39.964ms
             - ExecTreeOpenTime: 21.347us
           - PrepareTime: 9.902ms
             - ExecTreePrepareTime: 36.079us
        DataStreamSender (dst_id=1):(Total: 105.872us, non-child: 105.872us, % non-child: 100.00%)
           - BytesSent: 433.00 B (433)
           - NetworkThroughput(*): 13.39 MB/sec
           - OverallThroughput: 3.90 MB/sec
           - PeakMemoryUsage: 3.45 KB (3536)
           - RowsReturned: 18 (18)
           - SerializeBatchTime: 8.186us
           - TransmitDataRPCTime: 30.834us
           - UncompressedRowBatchSize: 882.00 B (882)
        CodeGen:(Total: 49.055ms, non-child: 49.055ms, % non-child: 100.00%)
           - CodegenTime: 1.070ms
           - CompileTime: 13.675ms
           - LoadTime: 0.000ns
           - ModuleBitcodeSize: 1.86 MB (1950880)
           - NumFunctions: 29 (29)
           - NumInstructions: 470 (470)
           - OptimizationTime: 25.412ms
           - PeakMemoryUsage: 235.00 KB (240640)
           - PrepareTime: 9.617ms
        HDFS_SCAN_NODE (id=0):(Total: 340.137ms, non-child: 340.137ms, % non-child: 100.00%)
          Hdfs split stats (<volume id>:<# splits>/<split lengths>): 0:1/178.00 B 
          ExecOption: TEXT Codegen Enabled, Codegen enabled: 1 out of 1
          Hdfs Read Thread Concurrency Bucket: 0:100% 1:0% 2:0% 3:0% 4:0% 
          File Formats: TEXT/NONE:1 
          BytesRead(500.000ms): 0
           - AverageHdfsReadThreadConcurrency: 0.00 
           - AverageScannerThreadConcurrency: 1.00 
           - BytesRead: 178.00 B (178)
           - BytesReadDataNodeCache: 0
           - BytesReadLocal: 178.00 B (178)
           - BytesReadRemoteUnexpected: 0
           - BytesReadShortCircuit: 178.00 B (178)
           - CachedFileHandlesHitCount: 0 (0)
           - CachedFileHandlesMissCount: 1 (1)
           - CollectionItemsRead: 0 (0)
           - DecompressionTime: 0.000ns
           - MaxCompressedTextFileLength: 0
           - NumDisksAccessed: 1 (1)
           - NumScannerThreadsStarted: 1 (1)
           - PeakMemoryUsage: 187.00 KB (191488)
           - PerReadThreadRawHdfsThroughput: 4.49 MB/sec
           - RemoteScanRanges: 0 (0)
           - RowBatchQueueGetWaitTime: 339.801ms
           - RowBatchQueuePutWaitTime: 0.000ns
           - RowsRead: 18 (18)
           - RowsReturned: 18 (18)
           - RowsReturnedRate: 52.00 /sec
           - ScanRangesComplete: 1 (1)
           - ScannerThreadsInvoluntaryContextSwitches: 6 (6)
           - ScannerThreadsTotalWallClockTime: 339.945ms
             - DelimiterParseTime: 2.267us
             - MaterializeTupleTime(*): 11.473us
             - ScannerThreadsSysTime: 0.000ns
             - ScannerThreadsUserTime: 0.000ns
           - ScannerThreadsVoluntaryContextSwitches: 3 (3)
           - TotalRawHdfsReadTime(*): 37.801us
           - TotalReadThroughput: 0.00 /sec

[hadoop03.Hadoop.com:21000] > 
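
When the full profile is more detail than needed, the summary command (listed under help earlier) should print just the condensed per-operator execution table for the most recent query, similar to the ExecSummary section embedded in the profile above:

summary;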

If you insert data or create a database or table from the Hive shell, it is not directly queryable in Impala; you must run invalidate metadata to notify Impala of the metadata change.

Data inserted through impala-shell, by contrast, is directly queryable in Impala without any refresh. This is implemented by the catalog service, a module added in Impala 1.2 whose main job is to synchronize metadata between impalad instances.

Update operations notify the catalog service, which broadcasts the change to the other impalad processes. By default the catalog loads metadata asynchronously, so a query may have to wait until the metadata finishes loading (on first load).
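
Putting this together, a minimal Hive-then-Impala round trip might look like the sketch below (the table name testdb.t1 is illustrative):

# In Hive: create a new table and insert a row
hive -e "create table testdb.t1 (id int); insert into table testdb.t1 values (1);"

# In Impala: the new table is invisible until its metadata is invalidated
impala-shell -q "invalidate metadata testdb.t1"
impala-shell -q "select * from testdb.t1"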
