[hadoop@hadoop3 ~]$ sqoop help
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:37:19 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
usage: sqoop COMMAND [ARGS]
Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.
[hadoop@hadoop3 ~]$
[hadoop@hadoop3 ~]$ sqoop help import
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:38:29 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
usage: sqoop import [GENERIC-ARGS] [TOOL-ARGS]
Common arguments:
   --connect <jdbc-uri>                         Specify JDBC connect string
   --connection-manager <class-name>            Specify connection manager class name
   --connection-param-file <properties-file>    Specify connection parameters file
   --driver <class-name>                        Manually specify JDBC driver class to use
   --hadoop-home <hdir>                         Override $HADOOP_MAPRED_HOME_ARG
   --hadoop-mapred-home <dir>                   Override $HADOOP_MAPRED_HOME_ARG
   --help                                       Print usage instructions
-P                                              Read password from console
   --password <password>                        Set authentication password
   --password-alias <password-alias>            Credential provider password alias
   --password-file <password-file>              Set authentication password file path
   --relaxed-isolation                          Use read-uncommitted isolation for imports
   --skip-dist-cache                            Skip copying jars to distributed cache
   --username <username>                        Set authentication username
   --verbose                                    Print more information while working

Import control arguments:
   --append                                     Imports data in append mode
   --as-avrodatafile                            Imports data to Avro data files
   --as-parquetfile                             Imports data to Parquet files
   --as-sequencefile                            Imports data to SequenceFiles
   --as-textfile                                Imports data as plain text (default)
   --autoreset-to-one-mapper                    Reset the number of mappers to one mapper if no split key available
   --boundary-query <statement>                 Set boundary query for retrieving max and min value of the primary key
   --columns <col,col,col...>                   Columns to import from table
   --compression-codec <codec>                  Compression codec to use for import
   --delete-target-dir                          Imports data in delete mode
   --direct                                     Use direct import fast path
   --direct-split-size <n>                      Split the input stream every 'n' bytes when importing in direct mode
-e,--query <statement>                          Import results of SQL 'statement'
   --fetch-size <n>                             Set number 'n' of rows to fetch from the database when more rows are needed
   --inline-lob-limit <n>                       Set the maximum size for an inline LOB
-m,--num-mappers <n>                            Use 'n' map tasks to import in parallel
   --mapreduce-job-name <name>                  Set name for generated mapreduce job
   --merge-key <column>                         Key column to use to join results
   --split-by <column-name>                     Column of the table used to split work units
   --table <table-name>                         Table to read
   --target-dir <dir>                           HDFS plain table destination
   --validate                                   Validate the copy using the configured validator
   --validation-failurehandler <validation-failurehandler>    Fully qualified class name for ValidationFailureHandler
   --validation-threshold <validation-threshold>              Fully qualified class name for ValidationThreshold
   --validator <validator>                      Fully qualified class name for the Validator
   --warehouse-dir <dir>                        HDFS parent for table destination
   --where <where clause>                       WHERE clause to use during import
-z,--compress                                   Enable compression

Incremental import arguments:
   --check-column <column>        Source column to check for incremental change
   --incremental <import-type>    Define an incremental import of type 'append' or 'lastmodified'
   --last-value <value>           Last imported value in the incremental check column

Output line formatting arguments:
   --enclosed-by <char>               Sets a required field enclosing character
   --escaped-by <char>                Sets the escape character
   --fields-terminated-by <char>      Sets the field separator character
   --lines-terminated-by <char>       Sets the end-of-line character
   --mysql-delimiters                 Uses MySQL's default delimiter set: fields: ,  lines: \n  escaped-by: \  optionally-enclosed-by: '
   --optionally-enclosed-by <char>    Sets a field enclosing character

Input parsing arguments:
   --input-enclosed-by <char>               Sets a required field encloser
   --input-escaped-by <char>                Sets the input escape character
   --input-fields-terminated-by <char>      Sets the input field separator
   --input-lines-terminated-by <char>       Sets the input end-of-line char
   --input-optionally-enclosed-by <char>    Sets a field enclosing character

Hive arguments:
   --create-hive-table                         Fail if the target hive table exists
   --hive-database <database-name>             Sets the database name to use when importing to hive
   --hive-delims-replacement <arg>             Replace Hive record \0x01 and row delimiters (\n\r) from imported string fields with user-defined string
   --hive-drop-import-delims                   Drop Hive record \0x01 and row delimiters (\n\r) from imported string fields
   --hive-home <dir>                           Override $HIVE_HOME
   --hive-import                               Import tables into Hive (Uses Hive's default delimiters if none are set.)
   --hive-overwrite                            Overwrite existing data in the Hive table
   --hive-partition-key <partition-key>        Sets the partition key to use when importing to hive
   --hive-partition-value <partition-value>    Sets the partition value to use when importing to hive
   --hive-table <table-name>                   Sets the table name to use when importing to hive
   --map-column-hive <arg>                     Override mapping for specific column to hive types.

HBase arguments:
   --column-family <family>    Sets the target column family for the import
   --hbase-bulkload            Enables HBase bulk loading
   --hbase-create-table        If specified, create missing HBase tables
   --hbase-row-key <col>       Specifies which input column to use as the row key
   --hbase-table <table>       Import to <table> in HBase

HCatalog arguments:
   --hcatalog-database <arg>                        HCatalog database name
   --hcatalog-home <hdir>                           Override $HCAT_HOME
   --hcatalog-partition-keys <partition-key>        Sets the partition keys to use when importing to hive
   --hcatalog-partition-values <partition-value>    Sets the partition values to use when importing to hive
   --hcatalog-table <arg>                           HCatalog table name
   --hive-home <dir>                                Override $HIVE_HOME
   --hive-partition-key <partition-key>             Sets the partition key to use when importing to hive
   --hive-partition-value <partition-value>         Sets the partition value to use when importing to hive
   --map-column-hive <arg>                          Override mapping for specific column to hive types.

HCatalog import specific options:
   --create-hcatalog-table            Create HCatalog before import
   --hcatalog-storage-stanza <arg>    HCatalog storage stanza for table creation

Accumulo arguments:
   --accumulo-batch-size <size>          Batch size in bytes
   --accumulo-column-family <family>     Sets the target column family for the import
   --accumulo-create-table               If specified, create missing Accumulo tables
   --accumulo-instance <instance>        Accumulo instance name.
   --accumulo-max-latency <latency>      Max write latency in milliseconds
   --accumulo-password <password>        Accumulo password.
   --accumulo-row-key <col>              Specifies which input column to use as the row key
   --accumulo-table <table>              Import to <table> in Accumulo
   --accumulo-user <user>                Accumulo user name.
   --accumulo-visibility <vis>           Visibility token to be applied to all rows imported
   --accumulo-zookeepers <zookeepers>    Comma-separated list of zookeepers (host:port)

Code generation arguments:
   --bindir <dir>                        Output directory for compiled objects
   --class-name <name>                   Sets the generated class name. This overrides --package-name. When combined with --jar-file, sets the input class.
   --input-null-non-string <null-str>    Input null non-string representation
   --input-null-string <null-str>        Input null string representation
   --jar-file <file>                     Disable code generation; use specified jar
   --map-column-java <arg>               Override mapping for specific columns to java types
   --null-non-string <null-str>          Null non-string representation
   --null-string <null-str>              Null string representation
   --outdir <dir>                        Output directory for generated code
   --package-name <name>                 Put auto-generated classes in this package

Generic Hadoop command-line arguments:
(must preceed any tool-specific arguments)
Generic options supported are
-conf <configuration file>                      specify an application configuration file
-D <property=value>                             use value for given property
-fs <local|namenode:port>                       specify a namenode
-jt <local|resourcemanager:port>                specify a ResourceManager
-files <comma separated list of files>          specify comma separated files to be copied to the map reduce cluster
-libjars <comma separated list of jars>         specify comma separated jar files to include in the classpath.
-archives <comma separated list of archives>    specify comma separated archives to be unarchived on the compute machines.

The general command line syntax is
bin/hadoop command [genericOptions] [commandOptions]

At minimum, you must specify --connect and --table
Arguments to mysqldump and other subprograms may be supplied after a '--' on the command line.
[hadoop@hadoop3 ~]$
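Note the ordering rule near the end of the help text: generic Hadoop options must precede any tool-specific arguments. A minimal sketch of that ordering against the same cluster used below (the job name set via -D is purely illustrative):

sqoop import \
  -D mapreduce.job.name=help_keyword_import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  -P \
  --table help_keyword

As the last lines of the help output state, --connect and --table are the only arguments that are always required.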
Examples
List the databases available in MySQL:
[hadoop@hadoop3 ~]$ sqoop list-databases \
> --connect jdbc:mysql://hadoop1:3306/ \
> --username root \
> --password root
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:43:51 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 13:43:51 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 13:43:51 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
information_schema
hivedb
mysql
performance_schema
test
[hadoop@hadoop3 ~]$
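The WARN line above is worth heeding: --password leaves the password in your shell history and in the process list. The same listing with a console prompt instead, using the -P flag from the common arguments:

sqoop list-databases \
  --connect jdbc:mysql://hadoop1:3306/ \
  --username root \
  -P

List the tables in a given database (here the mysql system database):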
[hadoop@hadoop3 ~]$ sqoop list-tables \
> --connect jdbc:mysql://hadoop1:3306/mysql \
> --username root \
> --password root
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:46:21 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 13:46:21 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 13:46:21 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
columns_priv
db
event
func
general_log
help_category
help_keyword
help_relation
help_topic
innodb_index_stats
innodb_table_stats
ndb_binlog_index
plugin
proc
procs_priv
proxies_priv
servers
slave_master_info
slave_relay_log_info
slave_worker_info
slow_log
tables_priv
time_zone
time_zone_leap_second
time_zone_name
time_zone_transition
time_zone_transition_type
user
[hadoop@hadoop3 ~]$
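Before importing a table wholesale, it can be handy to preview a few rows. The eval command from the command list runs an arbitrary SQL statement and prints the results to the console; a sketch against the same help_keyword table (the LIMIT value is arbitrary):

sqoop eval \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --query "SELECT * FROM help_keyword LIMIT 10"

Copy a table definition (not its data) from MySQL into Hive: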
[hadoop@hadoop3 ~]$ sqoop create-hive-table \
> --connect jdbc:mysql://hadoop1:3306/mysql \
> --username root \
> --password root \
> --table help_keyword \
> --hive-table hk
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:50:20 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 13:50:20 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 13:50:20 INFO tool.BaseSqoopTool: Using Hive-specific delimiters for output. You can override
18/04/12 13:50:20 INFO tool.BaseSqoopTool: delimiters with --fields-terminated-by, etc.
18/04/12 13:50:20 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/04/12 13:50:21 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
18/04/12 13:50:21 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/04/12 13:50:23 INFO hive.HiveImport: Loading uploaded data into Hive
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Class path contains multiple SLF4J bindings.
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/home/hadoop/apps/apache-hive-2.3.3-bin/lib/log4j-slf4j-impl-2.6.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
18/04/12 13:50:34 INFO hive.HiveImport: SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
18/04/12 13:50:36 INFO hive.HiveImport:
18/04/12 13:50:36 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/home/hadoop/apps/apache-hive-2.3.3-bin/lib/hive-common-2.3.3.jar!/hive-log4j2.properties Async: true
18/04/12 13:50:50 INFO hive.HiveImport: OK
18/04/12 13:50:50 INFO hive.HiveImport: Time taken: 11.651 seconds
18/04/12 13:50:51 INFO hive.HiveImport: Hive import complete.
[hadoop@hadoop3 ~]$
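create-hive-table only copied the schema of help_keyword into the Hive table hk; no rows were moved. To create the Hive table and load the data in one pass, the import tool can be combined with the Hive arguments from the help text; a sketch reusing the same source table:

sqoop import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --password root \
  --table help_keyword \
  --hive-import \
  --hive-table hk \
  --hive-overwrite \
  -m 1

Import a table from MySQL to HDFS with a single mapper: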
[hadoop@hadoop3 ~]$ sqoop import \
> --connect jdbc:mysql://hadoop1:3306/mysql \
> --username root \
> --password root \
> --table help_keyword \
> -m 1
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 13:53:48 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 13:53:48 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 13:53:48 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/04/12 13:53:48 INFO tool.CodeGenTool: Beginning code generation
18/04/12 13:53:49 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
18/04/12 13:53:49 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
18/04/12 13:53:49 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/apps/hadoop-2.7.5
Note: /tmp/sqoop-hadoop/compile/979d87b9521d0a09ee6620060a112d60/help_keyword.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/04/12 13:53:51 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/979d87b9521d0a09ee6620060a112d60/help_keyword.jar
18/04/12 13:53:51 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/04/12 13:53:51 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/04/12 13:53:51 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/04/12 13:53:51 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/04/12 13:53:51 INFO mapreduce.ImportJobBase: Beginning import of help_keyword
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/04/12 13:53:52 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/04/12 13:53:53 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/04/12 13:53:58 INFO db.DBInputFormat: Using read commited transaction isolation
18/04/12 13:53:58 INFO mapreduce.JobSubmitter: number of splits:1
18/04/12 13:53:59 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1523510178850_0001
18/04/12 13:54:00 INFO impl.YarnClientImpl: Submitted application application_1523510178850_0001
18/04/12 13:54:00 INFO mapreduce.Job: The url to track the job: http://hadoop3:8088/proxy/application_1523510178850_0001/
18/04/12 13:54:00 INFO mapreduce.Job: Running job: job_1523510178850_0001
18/04/12 13:54:17 INFO mapreduce.Job: Job job_1523510178850_0001 running in uber mode : false
18/04/12 13:54:17 INFO mapreduce.Job: map 0% reduce 0%
18/04/12 13:54:33 INFO mapreduce.Job: map 100% reduce 0%
18/04/12 13:54:34 INFO mapreduce.Job: Job job_1523510178850_0001 completed successfully
18/04/12 13:54:35 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=142965
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=87
HDFS: Number of bytes written=8264
HDFS: Number of read operations=4
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=12142
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=12142
Total vcore-milliseconds taken by all map tasks=12142
Total megabyte-milliseconds taken by all map tasks=12433408
Map-Reduce Framework
Map input records=619
Map output records=619
Input split bytes=87
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=123
CPU time spent (ms)=1310
Physical memory (bytes) snapshot=93212672
Virtual memory (bytes) snapshot=2068234240
Total committed heap usage (bytes)=17567744
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=8264
18/04/12 13:54:35 INFO mapreduce.ImportJobBase: Transferred 8.0703 KB in 41.8111 seconds (197.6507 bytes/sec)
18/04/12 13:54:35 INFO mapreduce.ImportJobBase: Retrieved 619 records.
[hadoop@hadoop3 ~]$
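With one mapper the import produces a single output file. Since no --target-dir was given, the data lands under the table name in the user's HDFS home directory; assuming the defaults above, it can be inspected with:

hdfs dfs -ls /user/hadoop/help_keyword
hdfs dfs -cat /user/hadoop/help_keyword/part-m-00000

Incremental import: append only the rows whose help_keyword_id is greater than 500: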
[hadoop@hadoop3 ~]$ sqoop import \
> --connect jdbc:mysql://hadoop1:3306/mysql \
> --username root \
> --password root \
> --table help_keyword \
> --target-dir /user/hadoop/myimport_add \
> --incremental append \
> --check-column help_keyword_id \
> --last-value 500 \
> -m 1
Warning: /home/hadoop/apps/sqoop-1.4.6/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /home/hadoop/apps/sqoop-1.4.6/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
18/04/12 22:01:07 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6
18/04/12 22:01:08 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/04/12 22:01:08 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
18/04/12 22:01:08 INFO tool.CodeGenTool: Beginning code generation
18/04/12 22:01:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
18/04/12 22:01:08 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM `help_keyword` AS t LIMIT 1
18/04/12 22:01:08 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /home/hadoop/apps/hadoop-2.7.5
Note: /tmp/sqoop-hadoop/compile/a51619d1ef8c6e4b112a209326ed9e0f/help_keyword.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
18/04/12 22:01:11 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/a51619d1ef8c6e4b112a209326ed9e0f/help_keyword.jar
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hadoop-2.7.5/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop/apps/hbase-1.2.6/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
18/04/12 22:01:12 INFO tool.ImportTool: Maximal id query for free form incremental import: SELECT MAX(`help_keyword_id`) FROM `help_keyword`
18/04/12 22:01:12 INFO tool.ImportTool: Incremental import based on column `help_keyword_id`
18/04/12 22:01:12 INFO tool.ImportTool: Lower bound value: 500
18/04/12 22:01:12 INFO tool.ImportTool: Upper bound value: 618
18/04/12 22:01:12 WARN manager.MySQLManager: It looks like you are importing from mysql.
18/04/12 22:01:12 WARN manager.MySQLManager: This transfer can be faster! Use the --direct
18/04/12 22:01:12 WARN manager.MySQLManager: option to exercise a MySQL-specific fast path.
18/04/12 22:01:12 INFO manager.MySQLManager: Setting zero DATETIME behavior to convertToNull (mysql)
18/04/12 22:01:12 INFO mapreduce.ImportJobBase: Beginning import of help_keyword
18/04/12 22:01:12 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
18/04/12 22:01:12 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
18/04/12 22:01:17 INFO db.DBInputFormat: Using read commited transaction isolation
18/04/12 22:01:17 INFO mapreduce.JobSubmitter: number of splits:1
18/04/12 22:01:17 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1523510178850_0010
18/04/12 22:01:19 INFO impl.YarnClientImpl: Submitted application application_1523510178850_0010
18/04/12 22:01:19 INFO mapreduce.Job: The url to track the job: http://hadoop3:8088/proxy/application_1523510178850_0010/
18/04/12 22:01:19 INFO mapreduce.Job: Running job: job_1523510178850_0010
18/04/12 22:01:30 INFO mapreduce.Job: Job job_1523510178850_0010 running in uber mode : false
18/04/12 22:01:30 INFO mapreduce.Job: map 0% reduce 0%
18/04/12 22:01:40 INFO mapreduce.Job: map 100% reduce 0%
18/04/12 22:01:40 INFO mapreduce.Job: Job job_1523510178850_0010 completed successfully
18/04/12 22:01:41 INFO mapreduce.Job: Counters: 30
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=143200
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=87
HDFS: Number of bytes written=1576
HDFS: Number of read operations=4
HDFS: Number of large read operations=0
HDFS: Number of write operations=2
Job Counters
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=7188
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=7188
Total vcore-milliseconds taken by all map tasks=7188
Total megabyte-milliseconds taken by all map tasks=7360512
Map-Reduce Framework
Map input records=118
Map output records=118
Input split bytes=87
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=86
CPU time spent (ms)=870
Physical memory (bytes) snapshot=95576064
Virtual memory (bytes) snapshot=2068234240
Total committed heap usage (bytes)=18608128
File Input Format Counters
Bytes Read=0
File Output Format Counters
Bytes Written=1576
18/04/12 22:01:41 INFO mapreduce.ImportJobBase: Transferred 1.5391 KB in 28.3008 seconds (55.6875 bytes/sec)
18/04/12 22:01:41 INFO mapreduce.ImportJobBase: Retrieved 118 records.
18/04/12 22:01:41 INFO util.AppendUtils: Creating missing output directory - myimport_add
18/04/12 22:01:41 INFO tool.ImportTool: Incremental import complete! To run another incremental import of all data following this import, supply the following arguments:
18/04/12 22:01:41 INFO tool.ImportTool: --incremental append
18/04/12 22:01:41 INFO tool.ImportTool: --check-column help_keyword_id
18/04/12 22:01:41 INFO tool.ImportTool: --last-value 618
18/04/12 22:01:41 INFO tool.ImportTool: (Consider saving this with 'sqoop job --create')
[hadoop@hadoop3 ~]$
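The closing ImportTool lines spell out the arguments for the next incremental run (--last-value has advanced from 500 to 618) and suggest persisting them as a saved job. A sketch of that, with an arbitrary job name (note the bare -- that separates job options from the import tool's own arguments):

sqoop job --create help_keyword_inc -- import \
  --connect jdbc:mysql://hadoop1:3306/mysql \
  --username root \
  --table help_keyword \
  --target-dir /user/hadoop/myimport_add \
  --incremental append \
  --check-column help_keyword_id \
  --last-value 618 \
  -m 1

sqoop job --exec help_keyword_inc

A saved job records the new --last-value in the Sqoop metastore after each successful run, so repeated --exec calls pick up only the rows added since the previous one. By default the metastore does not store passwords, so --exec prompts for one.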