1. Sqoop import from Oracle to HDFS
[hadoop@slave-245 ~]$ sqoop import --append --connect jdbc:oracle:thin:@172.30.1.215:1521:rtt --username RTT --password 123 --target-dir /user/hadoop/test2 -m 4 --table CFG_MOUDLE_LIMIT --columns ID,MOUDLEID,NAME,LIMIT1,LIMIT2 --fields-terminated-by '\t'
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: $HADOOP_HOME is deprecated.
13/11/29 13:39:57 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/11/29 13:39:57 INFO manager.SqlManager: Using default fetchSize of 1000
13/11/29 13:39:57 INFO tool.CodeGenTool: Beginning code generation
13/11/29 13:40:11 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 13:40:11 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CFG_MOUDLE_LIMIT t WHERE 1=0
13/11/29 13:40:11 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hadoop
Note: /tmp/sqoop-hadoop/compile/257e2bf81a0aa9bf8ab5f4e30891ffc0/CFG_MOUDLE_LIMIT.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/11/29 13:40:13 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/257e2bf81a0aa9bf8ab5f4e30891ffc0/CFG_MOUDLE_LIMIT.jar
13/11/29 13:40:13 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 13:40:13 INFO mapreduce.ImportJobBase: Beginning import of CFG_MOUDLE_LIMIT
13/11/29 13:40:16 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(ID), MAX(ID) FROM CFG_MOUDLE_LIMIT
13/11/29 13:40:16 INFO mapred.JobClient: Running job: job_201311261153_0108
13/11/29 13:40:17 INFO mapred.JobClient: map 0% reduce 0%
13/11/29 13:40:25 INFO mapred.JobClient: map 25% reduce 0%
13/11/29 13:40:27 INFO mapred.JobClient: map 50% reduce 0%
13/11/29 13:40:28 INFO mapred.JobClient: map 75% reduce 0%
13/11/29 13:40:30 INFO mapred.JobClient: map 100% reduce 0%
13/11/29 13:40:31 INFO mapred.JobClient: Job complete: job_201311261153_0108
13/11/29 13:40:31 INFO mapred.JobClient: Counters: 18
13/11/29 13:40:31 INFO mapred.JobClient: Job Counters
13/11/29 13:40:31 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=17205
13/11/29 13:40:31 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/11/29 13:40:31 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/11/29 13:40:31 INFO mapred.JobClient: Launched map tasks=4
13/11/29 13:40:31 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/11/29 13:40:31 INFO mapred.JobClient: File Output Format Counters
13/11/29 13:40:31 INFO mapred.JobClient: Bytes Written=480
13/11/29 13:40:31 INFO mapred.JobClient: FileSystemCounters
13/11/29 13:40:31 INFO mapred.JobClient: HDFS_BYTES_READ=382
13/11/29 13:40:31 INFO mapred.JobClient: FILE_BYTES_WRITTEN=249012
13/11/29 13:40:31 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=480
13/11/29 13:40:31 INFO mapred.JobClient: File Input Format Counters
13/11/29 13:40:31 INFO mapred.JobClient: Bytes Read=0
13/11/29 13:40:31 INFO mapred.JobClient: Map-Reduce Framework
13/11/29 13:40:31 INFO mapred.JobClient: Map input records=21
13/11/29 13:40:31 INFO mapred.JobClient: Physical memory (bytes) snapshot=452415488
13/11/29 13:40:31 INFO mapred.JobClient: Spilled Records=0
13/11/29 13:40:31 INFO mapred.JobClient: CPU time spent (ms)=5170
13/11/29 13:40:31 INFO mapred.JobClient: Total committed heap usage (bytes)=497942528
13/11/29 13:40:31 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2883989504
13/11/29 13:40:31 INFO mapred.JobClient: Map output records=21
13/11/29 13:40:31 INFO mapred.JobClient: SPLIT_RAW_BYTES=382
13/11/29 13:40:31 INFO mapreduce.ImportJobBase: Transferred 480 bytes in 18.5042 seconds (25.9401 bytes/sec)
13/11/29 13:40:31 INFO mapreduce.ImportJobBase: Retrieved 21 records.
13/11/29 13:40:31 INFO util.AppendUtils: Creating missing output directory - test2
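The warning in the log above points out that `--password` on the command line is insecure. A minimal sketch of the same import using `--password-file` instead (available in Sqoop 1.4.4, the version on this cluster's classpath). The password-file path is hypothetical, and the script only assembles and prints the command as a dry run; drop the final `echo` indirection to actually execute it.

```shell
#!/bin/sh
# Sketch: the section-1 import rebuilt with --password-file instead of
# a plain-text --password argument. PASS_FILE is a hypothetical path;
# it should be readable only by the hadoop user (chmod 400).
PASS_FILE=${PASS_FILE:-/home/hadoop/.rtt_password}

CMD="sqoop import --append"
CMD="$CMD --connect jdbc:oracle:thin:@172.30.1.215:1521:rtt"
CMD="$CMD --username RTT --password-file file://$PASS_FILE"
CMD="$CMD --table CFG_MOUDLE_LIMIT --columns ID,MOUDLEID,NAME,LIMIT1,LIMIT2"
CMD="$CMD --target-dir /user/hadoop/test2 -m 4 --fields-terminated-by '\t'"

# Dry run: print the assembled command instead of running it.
echo "$CMD"
```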
2. Sqoop export from HDFS to Oracle
[hadoop@slave-245 ~]$ hadoop fs -cat /user/hadoop/test/*
Warning: $HADOOP_HOME is deprecated.
1 1 FBA 30000 100000
2 2 FBB 30000 100000
3 3 CHECK 30000 100000
4 4 MBA 30000 100000
5 5 MBB 30000 100000
6 6 FUND 30000 100000
7 7 MATCH 30000 100000
8 8 FBA 30000 100000
9 9 FBB 30000 100000
10 10 CHECK 30000 100000
11 11 MBA 30000 100000
12 12 MBB 30000 100000
13 13 FUND 30000 100000
14 14 MATCH 30000 100000
15 15 FBA 30000 100000
16 16 FBB 30000 100000
17 17 CHECK 30000 100000
18 18 MBA 30000 100000
19 19 MBB 30000 100000
20 20 FUND 30000 100000
21 21 MATCH 30000 100000
cat: File does not exist: /user/hadoop/test/_logs
[hadoop@slave-245 ~]$ sqoop export --connect jdbc:oracle:thin:@172.30.1.215:1521:rtt --table TEST -username RTT -password 123 --export-dir /user/hadoop/test --fields-terminated-by '\t'
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: $HADOOP_HOME is deprecated.
13/11/29 14:04:32 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/11/29 14:04:32 INFO manager.SqlManager: Using default fetchSize of 1000
13/11/29 14:04:32 INFO tool.CodeGenTool: Beginning code generation
13/11/29 14:04:33 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:04:33 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM TEST t WHERE 1=0
13/11/29 14:04:33 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hadoop
Note: /tmp/sqoop-hadoop/compile/d1f3395314bb79d99258cc34613fd730/TEST.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/11/29 14:04:35 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/d1f3395314bb79d99258cc34613fd730/TEST.jar
13/11/29 14:04:35 INFO mapreduce.ExportJobBase: Beginning export of TEST
13/11/29 14:04:36 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:04:39 INFO input.FileInputFormat: Total input paths to process : 2
13/11/29 14:04:39 INFO input.FileInputFormat: Total input paths to process : 2
13/11/29 14:04:39 INFO util.NativeCodeLoader: Loaded the native-hadoop library
13/11/29 14:04:39 WARN snappy.LoadSnappy: Snappy native library not loaded
13/11/29 14:04:40 INFO mapred.JobClient: Running job: job_201311261153_0113
13/11/29 14:04:41 INFO mapred.JobClient: map 0% reduce 0%
13/11/29 14:04:48 INFO mapred.JobClient: map 33% reduce 0%
13/11/29 14:04:49 INFO mapred.JobClient: map 66% reduce 0%
13/11/29 14:04:55 INFO mapred.JobClient: map 100% reduce 0%
13/11/29 14:04:56 INFO mapred.JobClient: Job complete: job_201311261153_0113
13/11/29 14:04:56 INFO mapred.JobClient: Counters: 18
13/11/29 14:04:56 INFO mapred.JobClient: Job Counters
13/11/29 14:04:56 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=14824
13/11/29 14:04:56 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/11/29 14:04:56 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/11/29 14:04:56 INFO mapred.JobClient: Launched map tasks=3
13/11/29 14:04:56 INFO mapred.JobClient: Data-local map tasks=3
13/11/29 14:04:56 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/11/29 14:04:56 INFO mapred.JobClient: File Output Format Counters
13/11/29 14:04:56 INFO mapred.JobClient: Bytes Written=0
13/11/29 14:04:56 INFO mapred.JobClient: FileSystemCounters
13/11/29 14:04:56 INFO mapred.JobClient: HDFS_BYTES_READ=1349
13/11/29 14:04:56 INFO mapred.JobClient: FILE_BYTES_WRITTEN=185573
13/11/29 14:04:56 INFO mapred.JobClient: File Input Format Counters
13/11/29 14:04:56 INFO mapred.JobClient: Bytes Read=0
13/11/29 14:04:56 INFO mapred.JobClient: Map-Reduce Framework
13/11/29 14:04:56 INFO mapred.JobClient: Map input records=21
13/11/29 14:04:56 INFO mapred.JobClient: Physical memory (bytes) snapshot=362950656
13/11/29 14:04:56 INFO mapred.JobClient: Spilled Records=0
13/11/29 14:04:56 INFO mapred.JobClient: CPU time spent (ms)=3730
13/11/29 14:04:56 INFO mapred.JobClient: Total committed heap usage (bytes)=426442752
13/11/29 14:04:56 INFO mapred.JobClient: Virtual memory (bytes) snapshot=2162253824
13/11/29 14:04:56 INFO mapred.JobClient: Map output records=21
13/11/29 14:04:56 INFO mapred.JobClient: SPLIT_RAW_BYTES=532
13/11/29 14:04:56 INFO mapreduce.ExportJobBase: Transferred 1.3174 KB in 19.788 seconds (68.1725 bytes/sec)
13/11/29 14:04:56 INFO mapreduce.ExportJobBase: Exported 21 records.
Note: the table TEST must be created in Oracle beforehand, with columns corresponding to the data in the HDFS file; --fields-terminated-by specifies the field delimiter used in the HDFS data.
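Since an export fails row by row when the field layout does not match the target table, it is worth a quick local sanity check that every row splits into the expected number of delimited fields before running sqoop export. A small sketch (the inline `printf` sample stands in for a local copy of the HDFS files, e.g. one fetched with `hadoop fs -cat /user/hadoop/test/part-* > sample.txt`):

```shell
#!/bin/sh
# Sketch: verify each row has the expected number of tab-separated
# fields (5 here: ID, MOUDLEID, NAME, LIMIT1, LIMIT2) before export.
# The sample rows below mirror the data shown earlier in this section.
printf '1\t1\tFBA\t30000\t100000\n2\t2\tFBB\t30000\t100000\n' > sample.txt

EXPECTED=5
awk -F'\t' -v n="$EXPECTED" \
    'NF != n { bad++; print "line " NR ": " NF " fields" }
     END { exit bad > 0 }' sample.txt \
  && echo "all rows have $EXPECTED fields"
```

Any row with the wrong field count is printed with its line number, and the pipeline exits non-zero so it can gate the export in a script.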
3. Sqoop import from Oracle to Hive
[hadoop@slave-245 ~]$ sqoop import --connect jdbc:oracle:thin:@172.30.1.215:1521:rtt -username RTT -password 123 --table CFG_MOUDLE_LIMIT --hive-import -m 1 --fields-terminated-by '\t'
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: $HADOOP_HOME is deprecated.
13/11/29 14:16:43 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/11/29 14:16:43 INFO manager.SqlManager: Using default fetchSize of 1000
13/11/29 14:16:43 INFO tool.CodeGenTool: Beginning code generation
13/11/29 14:16:44 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:16:44 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CFG_MOUDLE_LIMIT t WHERE 1=0
13/11/29 14:16:45 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hadoop
Note: /tmp/sqoop-hadoop/compile/75a4b5360d71bafa65fb9a12e6266000/CFG_MOUDLE_LIMIT.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/11/29 14:16:46 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/75a4b5360d71bafa65fb9a12e6266000/CFG_MOUDLE_LIMIT.jar
13/11/29 14:16:46 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:16:46 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:16:46 INFO mapreduce.ImportJobBase: Beginning import of CFG_MOUDLE_LIMIT
13/11/29 14:16:47 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:16:50 INFO mapred.JobClient: Running job: job_201311261153_0114
13/11/29 14:16:51 INFO mapred.JobClient: map 0% reduce 0%
13/11/29 14:16:59 INFO mapred.JobClient: map 100% reduce 0%
13/11/29 14:16:59 INFO mapred.JobClient: Job complete: job_201311261153_0114
13/11/29 14:16:59 INFO mapred.JobClient: Counters: 18
13/11/29 14:16:59 INFO mapred.JobClient: Job Counters
13/11/29 14:16:59 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=6608
13/11/29 14:16:59 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/11/29 14:16:59 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/11/29 14:16:59 INFO mapred.JobClient: Launched map tasks=1
13/11/29 14:16:59 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/11/29 14:16:59 INFO mapred.JobClient: File Output Format Counters
13/11/29 14:16:59 INFO mapred.JobClient: Bytes Written=564
13/11/29 14:16:59 INFO mapred.JobClient: FileSystemCounters
13/11/29 14:16:59 INFO mapred.JobClient: HDFS_BYTES_READ=87
13/11/29 14:16:59 INFO mapred.JobClient: FILE_BYTES_WRITTEN=62246
13/11/29 14:16:59 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=564
13/11/29 14:16:59 INFO mapred.JobClient: File Input Format Counters
13/11/29 14:16:59 INFO mapred.JobClient: Bytes Read=0
13/11/29 14:16:59 INFO mapred.JobClient: Map-Reduce Framework
13/11/29 14:16:59 INFO mapred.JobClient: Map input records=21
13/11/29 14:16:59 INFO mapred.JobClient: Physical memory (bytes) snapshot=118628352
13/11/29 14:16:59 INFO mapred.JobClient: Spilled Records=0
13/11/29 14:16:59 INFO mapred.JobClient: CPU time spent (ms)=1210
13/11/29 14:16:59 INFO mapred.JobClient: Total committed heap usage (bytes)=151650304
13/11/29 14:16:59 INFO mapred.JobClient: Virtual memory (bytes) snapshot=610611200
13/11/29 14:16:59 INFO mapred.JobClient: Map output records=21
13/11/29 14:16:59 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
13/11/29 14:16:59 INFO mapreduce.ImportJobBase: Transferred 564 bytes in 12.6707 seconds (44.5121 bytes/sec)
13/11/29 14:16:59 INFO mapreduce.ImportJobBase: Retrieved 21 records.
13/11/29 14:17:00 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:17:00 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CFG_MOUDLE_LIMIT t WHERE 1=0
13/11/29 14:17:00 WARN hive.TableDefWriter: Column ID had to be cast to a less precise type in Hive
13/11/29 14:17:00 WARN hive.TableDefWriter: Column MOUDLEID had to be cast to a less precise type in Hive
13/11/29 14:17:00 WARN hive.TableDefWriter: Column LIMIT1 had to be cast to a less precise type in Hive
13/11/29 14:17:00 WARN hive.TableDefWriter: Column LIMIT2 had to be cast to a less precise type in Hive
13/11/29 14:17:00 WARN hive.TableDefWriter: Column BOSTYPE had to be cast to a less precise type in Hive
13/11/29 14:17:00 INFO hive.HiveImport: Removing temporary files from import process: hdfs://slave-245:9000/user/hadoop/CFG_MOUDLE_LIMIT/_logs
13/11/29 14:17:00 INFO hive.HiveImport: Loading uploaded data into Hive
13/11/29 14:17:03 INFO hive.HiveImport:
13/11/29 14:17:03 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/usr/hive/lib/hive-common-0.11.0.jar!/hive-log4j.properties
13/11/29 14:17:03 INFO hive.HiveImport: Hive history file=/usr/hive/logs/hive_job_log_hadoop_3137@slave-245_201311291417_517875869.txt
13/11/29 14:17:11 INFO hive.HiveImport: OK
13/11/29 14:17:11 INFO hive.HiveImport: Time taken: 7.485 seconds
13/11/29 14:17:11 INFO hive.HiveImport: Loading data to table default.cfg_moudle_limit
13/11/29 14:17:12 INFO hive.HiveImport: Table default.cfg_moudle_limit stats: [num_partitions: 0, num_files: 2, num_rows: 0, total_size: 564, raw_data_size: 0]
13/11/29 14:17:12 INFO hive.HiveImport: OK
13/11/29 14:17:12 INFO hive.HiveImport: Time taken: 1.188 seconds
13/11/29 14:17:12 INFO hive.HiveImport: Hive import complete.
13/11/29 14:17:12 INFO hive.HiveImport: Export directory is empty, removing it.
[hadoop@slave-245 ~]$ hive
Logging initialized using configuration in jar:file:/usr/hive/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/usr/hive/logs/hive_job_log_hadoop_3251@slave-245_201311291417_1266708350.txt
hive> show tables;
OK
cfg_moudle_limit
fact_s0d5
s0d51
s0d_partition_180days
Time taken: 5.376 seconds, Fetched: 4 row(s)
hive> select * from cfg_moudle_limit;
OK
1.0 1.0 FBA N 30000.0 100000.0 0.0
2.0 2.0 FBB N 30000.0 100000.0 0.0
3.0 3.0 CHECK N 30000.0 100000.0 0.0
4.0 4.0 MBA N 30000.0 100000.0 0.0
5.0 5.0 MBB N 30000.0 100000.0 0.0
6.0 6.0 FUND N 30000.0 100000.0 0.0
7.0 7.0 MATCH N 30000.0 100000.0 0.0
8.0 8.0 FBA N 30000.0 100000.0 1.0
9.0 9.0 FBB N 30000.0 100000.0 1.0
10.0 10.0 CHECK N 30000.0 100000.0 1.0
11.0 11.0 MBA N 30000.0 100000.0 1.0
12.0 12.0 MBB N 30000.0 100000.0 1.0
13.0 13.0 FUND N 30000.0 100000.0 1.0
14.0 14.0 MATCH N 30000.0 100000.0 1.0
15.0 15.0 FBA N 30000.0 100000.0 2.0
16.0 16.0 FBB N 30000.0 100000.0 2.0
17.0 17.0 CHECK N 30000.0 100000.0 2.0
18.0 18.0 MBA N 30000.0 100000.0 2.0
19.0 19.0 MBB N 30000.0 100000.0 2.0
20.0 20.0 FUND N 30000.0 100000.0 2.0
21.0 21.0 MATCH N 30000.0 100000.0 2.0
Time taken: 1.114 seconds, Fetched: 21 row(s)
hive>
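The TableDefWriter warnings earlier ("cast to a less precise type") mean the Oracle NUMBER columns were mapped to Hive DOUBLE, which is why integer IDs print as `1.0` in the query above. Sqoop's `--map-column-hive` option can pin specific columns to other Hive types. A dry-run sketch of the same import with explicit mappings (the column list is taken from this walkthrough; verify the types against your own schema before running it for real, and note `-P` prompts for the password interactively):

```shell
#!/bin/sh
# Sketch: force the NUMBER columns to Hive INT instead of the default
# DOUBLE mapping, so IDs come back as 1 rather than 1.0.
CMD="sqoop import --connect jdbc:oracle:thin:@172.30.1.215:1521:rtt"
CMD="$CMD --username RTT -P --table CFG_MOUDLE_LIMIT"
CMD="$CMD --hive-import -m 1 --fields-terminated-by '\t'"
CMD="$CMD --map-column-hive ID=INT,MOUDLEID=INT,LIMIT1=INT,LIMIT2=INT"

# Dry run: print the assembled command instead of running it.
echo "$CMD"
```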
Importing into a Hive table with a specified name
[hadoop@slave-245 ~]$ sqoop import --connect jdbc:oracle:thin:@172.30.1.215:1521:rtt --username RTT --password 123 --table CFG_MOUDLE_LIMIT --columns ID,MOUDLEID,NAME,LIMIT1,LIMIT2 -m 1 --fields-terminated-by '\t' --hive-drop-import-delims --hive-import --hive-overwrite --hive-table info
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: $HADOOP_HOME is deprecated.
13/11/29 14:33:13 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/11/29 14:33:13 WARN tool.BaseSqoopTool: It seems that you've specified at least one of following:
13/11/29 14:33:13 WARN tool.BaseSqoopTool: --hive-home
13/11/29 14:33:13 WARN tool.BaseSqoopTool: --hive-overwrite
13/11/29 14:33:13 WARN tool.BaseSqoopTool: --create-hive-table
13/11/29 14:33:13 WARN tool.BaseSqoopTool: --hive-table
13/11/29 14:33:13 WARN tool.BaseSqoopTool: --hive-partition-key
13/11/29 14:33:13 WARN tool.BaseSqoopTool: --hive-partition-value
13/11/29 14:33:13 WARN tool.BaseSqoopTool: --map-column-hive
13/11/29 14:33:13 WARN tool.BaseSqoopTool: Without specifying parameter --hive-import. Please note that
13/11/29 14:33:13 WARN tool.BaseSqoopTool: those arguments will not be used in this session. Either
13/11/29 14:33:13 WARN tool.BaseSqoopTool: specify --hive-import to apply them correctly or remove them
13/11/29 14:33:13 WARN tool.BaseSqoopTool: from command line to remove this warning.
13/11/29 14:33:13 INFO tool.BaseSqoopTool: Please note that --hive-home, --hive-partition-key,
13/11/29 14:33:13 INFO tool.BaseSqoopTool: hive-partition-value and --map-column-hive options are
13/11/29 14:33:13 INFO tool.BaseSqoopTool: are also valid for HCatalog imports and exports
13/11/29 14:33:13 INFO manager.SqlManager: Using default fetchSize of 1000
13/11/29 14:33:13 INFO tool.CodeGenTool: Beginning code generation
13/11/29 14:33:14 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:33:14 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CFG_MOUDLE_LIMIT t WHERE 1=0
13/11/29 14:33:14 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hadoop
Note: /tmp/sqoop-hadoop/compile/26319ba756bb4bb235a2c5733512b4e5/CFG_MOUDLE_LIMIT.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/11/29 14:33:16 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/26319ba756bb4bb235a2c5733512b4e5/CFG_MOUDLE_LIMIT.jar
13/11/29 14:33:16 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:33:16 INFO mapreduce.ImportJobBase: Beginning import of CFG_MOUDLE_LIMIT
13/11/29 14:33:20 INFO mapred.JobClient: Running job: job_201311261153_0116
13/11/29 14:33:21 INFO mapred.JobClient: map 0% reduce 0%
13/11/29 14:33:29 INFO mapred.JobClient: map 100% reduce 0%
13/11/29 14:33:30 INFO mapred.JobClient: Job complete: job_201311261153_0116
13/11/29 14:33:30 INFO mapred.JobClient: Counters: 18
13/11/29 14:33:30 INFO mapred.JobClient: Job Counters
13/11/29 14:33:30 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=7089
13/11/29 14:33:30 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/11/29 14:33:30 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/11/29 14:33:30 INFO mapred.JobClient: Launched map tasks=1
13/11/29 14:33:30 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/11/29 14:33:30 INFO mapred.JobClient: File Output Format Counters
13/11/29 14:33:30 INFO mapred.JobClient: Bytes Written=480
13/11/29 14:33:30 INFO mapred.JobClient: FileSystemCounters
13/11/29 14:33:30 INFO mapred.JobClient: HDFS_BYTES_READ=87
13/11/29 14:33:30 INFO mapred.JobClient: FILE_BYTES_WRITTEN=62239
13/11/29 14:33:30 INFO mapred.JobClient: HDFS_BYTES_WRITTEN=480
13/11/29 14:33:30 INFO mapred.JobClient: File Input Format Counters
13/11/29 14:33:30 INFO mapred.JobClient: Bytes Read=0
13/11/29 14:33:30 INFO mapred.JobClient: Map-Reduce Framework
13/11/29 14:33:30 INFO mapred.JobClient: Map input records=21
13/11/29 14:33:30 INFO mapred.JobClient: Physical memory (bytes) snapshot=114044928
13/11/29 14:33:30 INFO mapred.JobClient: Spilled Records=0
13/11/29 14:33:30 INFO mapred.JobClient: CPU time spent (ms)=1330
13/11/29 14:33:30 INFO mapred.JobClient: Total committed heap usage (bytes)=123600896
13/11/29 14:33:30 INFO mapred.JobClient: Virtual memory (bytes) snapshot=789786624
13/11/29 14:33:30 INFO mapred.JobClient: Map output records=21
13/11/29 14:33:30 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
13/11/29 14:33:30 INFO mapreduce.ImportJobBase: Transferred 480 bytes in 13.7191 seconds (34.9878 bytes/sec)
13/11/29 14:33:30 INFO mapreduce.ImportJobBase: Retrieved 21 records.
13/11/29 14:33:30 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:33:30 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CFG_MOUDLE_LIMIT t WHERE 1=0
13/11/29 14:33:30 WARN hive.TableDefWriter: Column ID had to be cast to a less precise type in Hive
13/11/29 14:33:30 WARN hive.TableDefWriter: Column MOUDLEID had to be cast to a less precise type in Hive
13/11/29 14:33:30 WARN hive.TableDefWriter: Column LIMIT1 had to be cast to a less precise type in Hive
13/11/29 14:33:30 WARN hive.TableDefWriter: Column LIMIT2 had to be cast to a less precise type in Hive
13/11/29 14:33:30 INFO hive.HiveImport: Removing temporary files from import process: hdfs://slave-245:9000/user/hadoop/CFG_MOUDLE_LIMIT/_logs
13/11/29 14:33:30 INFO hive.HiveImport: Loading uploaded data into Hive
13/11/29 14:33:33 INFO hive.HiveImport:
13/11/29 14:33:33 INFO hive.HiveImport: Logging initialized using configuration in jar:file:/usr/hive/lib/hive-common-0.11.0.jar!/hive-log4j.properties
13/11/29 14:33:33 INFO hive.HiveImport: Hive history file=/usr/hive/logs/hive_job_log_hadoop_4247@slave-245_201311291433_1434673725.txt
13/11/29 14:33:39 INFO hive.HiveImport: OK
13/11/29 14:33:39 INFO hive.HiveImport: Time taken: 5.868 seconds
13/11/29 14:33:39 INFO hive.HiveImport: Loading data to table default.info
13/11/29 14:33:40 INFO hive.HiveImport: Deleted hdfs://slave-245:9000/user/hive/warehouse/info
13/11/29 14:33:40 INFO hive.HiveImport: Table default.info stats: [num_partitions: 0, num_files: 2, num_rows: 0, total_size: 480, raw_data_size: 0]
13/11/29 14:33:40 INFO hive.HiveImport: OK
13/11/29 14:33:40 INFO hive.HiveImport: Time taken: 0.818 seconds
13/11/29 14:33:40 INFO hive.HiveImport: Hive import complete.
[hadoop@slave-245 ~]$ hive
Logging initialized using configuration in jar:file:/usr/hive/lib/hive-common-0.11.0.jar!/hive-log4j.properties
Hive history file=/usr/hive/logs/hive_job_log_hadoop_4360@slave-245_201311291433_30571456.txt
hive> show tables;
OK
cfg_moudle_limit
fact_s0d5
info
s0d51
s0d52
s0d_partition_180days
Time taken: 5.337 seconds, Fetched: 6 row(s)
hive> select * from info;
OK
1.0 1.0 FBA 30000.0 100000.0
2.0 2.0 FBB 30000.0 100000.0
3.0 3.0 CHECK 30000.0 100000.0
4.0 4.0 MBA 30000.0 100000.0
5.0 5.0 MBB 30000.0 100000.0
6.0 6.0 FUND 30000.0 100000.0
7.0 7.0 MATCH 30000.0 100000.0
8.0 8.0 FBA 30000.0 100000.0
9.0 9.0 FBB 30000.0 100000.0
10.0 10.0 CHECK 30000.0 100000.0
11.0 11.0 MBA 30000.0 100000.0
12.0 12.0 MBB 30000.0 100000.0
13.0 13.0 FUND 30000.0 100000.0
14.0 14.0 MATCH 30000.0 100000.0
15.0 15.0 FBA 30000.0 100000.0
16.0 16.0 FBB 30000.0 100000.0
17.0 17.0 CHECK 30000.0 100000.0
18.0 18.0 MBA 30000.0 100000.0
19.0 19.0 MBB 30000.0 100000.0
20.0 20.0 FUND 30000.0 100000.0
21.0 21.0 MATCH 30000.0 100000.0
Time taken: 1.093 seconds, Fetched: 21 row(s)
hive>
4. Sqoop import from Oracle to HBase
[hadoop@slave-245 ~]$ sqoop import --connect jdbc:oracle:thin:@172.30.1.215:1521:rtt --username RTT --password 123 --table CFG_MOUDLE_LIMIT --columns ID,MOUDLEID,NAME,LIMIT1,LIMIT2 -m 1 --fields-terminated-by '\t' --hbase-create-table --hbase-table info --hbase-row-key ID --column-family info
Warning: /usr/lib/hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: $HADOOP_HOME is deprecated.
13/11/29 14:40:44 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
13/11/29 14:40:44 INFO manager.SqlManager: Using default fetchSize of 1000
13/11/29 14:40:44 INFO tool.CodeGenTool: Beginning code generation
13/11/29 14:40:45 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:40:45 INFO manager.SqlManager: Executing SQL statement: SELECT t.* FROM CFG_MOUDLE_LIMIT t WHERE 1=0
13/11/29 14:40:45 INFO orm.CompilationManager: HADOOP_MAPRED_HOME is /usr/hadoop
Note: /tmp/sqoop-hadoop/compile/9e371755e13c1125305baa07b4fb1703/CFG_MOUDLE_LIMIT.java uses or overrides a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
13/11/29 14:40:47 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-hadoop/compile/9e371755e13c1125305baa07b4fb1703/CFG_MOUDLE_LIMIT.jar
13/11/29 14:40:47 INFO manager.OracleManager: Time zone has been set to GMT
13/11/29 14:40:47 INFO mapreduce.ImportJobBase: Beginning import of CFG_MOUDLE_LIMIT
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:host.name=slave-245
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_25
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/java/jdk1.7.0_25/jre
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:java.class.path=/usr/hadoop/libexec/../conf:/usr/java/jdk1.7.0_25/lib/tools.jar:/usr/hadoop/libexec/..:/usr/hadoop/libexec/../hadoop-core-1.1.2.jar:/usr/hadoop/libexec/../lib/asm-3.2.jar:/usr/hadoop/libexec/../lib/aspectjrt-1.6.11.jar:/usr/hadoop/libexec/../lib/aspectjtools-1.6.11.jar:/usr/hadoop/libexec/../lib/commons-beanutils-1.7.0.jar:/usr/hadoop/libexec/../lib/commons-beanutils-core-1.8.0.jar:/usr/hadoop/libexec/../lib/commons-cli-1.2.jar:/usr/hadoop/libexec/../lib/commons-codec-1.4.jar:/usr/hadoop/libexec/../lib/commons-collections-3.2.1.jar:/usr/hadoop/libexec/../lib/commons-configuration-1.6.jar:/usr/hadoop/libexec/../lib/commons-daemon-1.0.1.jar:/usr/hadoop/libexec/../lib/commons-digester-1.8.jar:/usr/hadoop/libexec/../lib/commons-el-1.0.jar:/usr/hadoop/libexec/../lib/commons-httpclient-3.0.1.jar:/usr/hadoop/libexec/../lib/commons-io-2.1.jar:/usr/hadoop/libexec/../lib/commons-lang-2.4.jar:/usr/hadoop/libexec/../lib/commons-logging-1.1.1.jar:/usr/hadoop/libexec/../lib/commons-logging-api-1.0.4.jar:/usr/hadoop/libexec/../lib/commons-math-2.1.jar:/usr/hadoop/libexec/../lib/commons-net-3.1.jar:/usr/hadoop/libexec/../lib/core-3.1.1.jar:/usr/hadoop/libexec/../lib/guava-11.0.2.jar:/usr/hadoop/libexec/../lib/hadoop-capacity-scheduler-1.1.2.jar:/usr/hadoop/libexec/../lib/hadoop-fairscheduler-1.1.2.jar:/usr/hadoop/libexec/../lib/hadoop-thriftfs-1.1.2.jar:/usr/hadoop/libexec/../lib/hbase-0.94.9.jar:/usr/hadoop/libexec/../lib/hsqldb-1.8.0.10.jar:/usr/hadoop/libexec/../lib/jackson-core-asl-1.8.8.jar:/usr/hadoop/libexec/../lib/jackson-mapper-asl-1.8.8.jar:/usr/hadoop/libexec/../lib/jasper-compiler-5.5.12.jar:/usr/hadoop/libexec/../lib/jasper-runtime-5.5.12.jar:/usr/hadoop/libexec/../lib/jdeb-0.8.jar:/usr/hadoop/libexec/../lib/jersey-core-1.8.jar:/usr/hadoop/libexec/../lib/jersey-json-1.8.jar:/usr/hadoop/libexec/../lib/jersey-server-1.8.jar:/usr/hadoop/libexec/../lib/jets3t-0.6.1.jar:/usr/hadoop/libexec/../lib
/jetty-6.1.26.jar:/usr/hadoop/libexec/../lib/jetty-util-6.1.26.jar:/usr/hadoop/libexec/../lib/jsch-0.1.42.jar:/usr/hadoop/libexec/../lib/junit-4.5.jar:/usr/hadoop/libexec/../lib/kfs-0.2.2.jar:/usr/hadoop/libexec/../lib/log4j-1.2.15.jar:/usr/hadoop/libexec/../lib/mockito-all-1.8.5.jar:/usr/hadoop/libexec/../lib/mysql-connector-java-5.1.6-bin.jar:/usr/hadoop/libexec/../lib/ojdbc6.jar:/usr/hadoop/libexec/../lib/oro-2.0.8.jar:/usr/hadoop/libexec/../lib/protobuf-java-2.4.0a.jar:/usr/hadoop/libexec/../lib/servlet-api-2.5-20081211.jar:/usr/hadoop/libexec/../lib/slf4j-api-1.4.3.jar:/usr/hadoop/libexec/../lib/slf4j-log4j12-1.4.3.jar:/usr/hadoop/libexec/../lib/sqoop-1.4.4.jar:/usr/hadoop/libexec/../lib/xmlenc-0.52.jar:/usr/hadoop/libexec/../lib/zookeeper-3.4.5.jar:/usr/hadoop/libexec/../lib/jsp-2.1/jsp-2.1.jar:/usr/hadoop/libexec/../lib/jsp-2.1/jsp-api-2.1.jar:.::/usr/java/jdk1.7.0_25/lib:/usr/java/jdk1.7.0_25/jre/lib:/usr/hadoop/bin
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/usr/hadoop/libexec/../lib/native/Linux-amd64-64
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-358.el6.x86_64
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:user.name=hadoop
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:user.home=/home/hadoop
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
13/11/29 14:40:48 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Test-230:2222,slave-245:2222,RTT215:2222 sessionTimeout=180000 watcher=hconnection
13/11/29 14:40:48 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4506@slave-245
13/11/29 14:40:48 INFO zookeeper.ClientCnxn: Opening socket connection to server RTT215/172.30.1.215:2222. Will not attempt to authenticate using SASL (unknown error)
13/11/29 14:40:48 INFO zookeeper.ClientCnxn: Socket connection established to RTT215/172.30.1.215:2222, initiating session
13/11/29 14:40:48 INFO zookeeper.ClientCnxn: Session establishment complete on server RTT215/172.30.1.215:2222, sessionid = 0x142928a6be80006, negotiated timeout = 180000
13/11/29 14:40:49 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Test-230:2222,slave-245:2222,RTT215:2222 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1078e8b8
13/11/29 14:40:49 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4506@slave-245
13/11/29 14:40:49 INFO zookeeper.ClientCnxn: Opening socket connection to server RTT215/172.30.1.215:2222. Will not attempt to authenticate using SASL (unknown error)
13/11/29 14:40:49 INFO zookeeper.ClientCnxn: Socket connection established to RTT215/172.30.1.215:2222, initiating session
13/11/29 14:40:49 INFO zookeeper.ClientCnxn: Session establishment complete on server RTT215/172.30.1.215:2222, sessionid = 0x142928a6be80007, negotiated timeout = 180000
13/11/29 14:40:49 INFO zookeeper.ZooKeeper: Session: 0x142928a6be80007 closed
13/11/29 14:40:49 INFO zookeeper.ClientCnxn: EventThread shut down
13/11/29 14:40:49 INFO mapreduce.HBaseImportJob: Creating missing HBase table info
13/11/29 14:40:50 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Test-230:2222,slave-245:2222,RTT215:2222 sessionTimeout=180000 watcher=catalogtracker-on-org.apache.hadoop.hbase.client.HConnectionManager$HConnectionImplementation@1078e8b8
13/11/29 14:40:50 INFO zookeeper.RecoverableZooKeeper: The identifier of this process is 4506@slave-245
13/11/29 14:40:50 INFO zookeeper.ClientCnxn: Opening socket connection to server RTT215/172.30.1.215:2222. Will not attempt to authenticate using SASL (unknown error)
13/11/29 14:40:50 INFO zookeeper.ClientCnxn: Socket connection established to RTT215/172.30.1.215:2222, initiating session
13/11/29 14:40:50 INFO zookeeper.ClientCnxn: Session establishment complete on server RTT215/172.30.1.215:2222, sessionid = 0x142928a6be80008, negotiated timeout = 180000
13/11/29 14:40:50 INFO zookeeper.ClientCnxn: EventThread shut down
13/11/29 14:40:50 INFO zookeeper.ZooKeeper: Session: 0x142928a6be80008 closed
13/11/29 14:40:55 INFO mapred.JobClient: Running job: job_201311261153_0117
13/11/29 14:40:56 INFO mapred.JobClient: map 0% reduce 0%
13/11/29 14:41:07 INFO mapred.JobClient: map 100% reduce 0%
13/11/29 14:41:08 INFO mapred.JobClient: Job complete: job_201311261153_0117
13/11/29 14:41:08 INFO mapred.JobClient: Counters: 17
13/11/29 14:41:08 INFO mapred.JobClient: Job Counters
13/11/29 14:41:08 INFO mapred.JobClient: SLOTS_MILLIS_MAPS=6985
13/11/29 14:41:08 INFO mapred.JobClient: Total time spent by all reduces waiting after reserving slots (ms)=0
13/11/29 14:41:08 INFO mapred.JobClient: Total time spent by all maps waiting after reserving slots (ms)=0
13/11/29 14:41:08 INFO mapred.JobClient: Launched map tasks=1
13/11/29 14:41:08 INFO mapred.JobClient: SLOTS_MILLIS_REDUCES=0
13/11/29 14:41:08 INFO mapred.JobClient: File Output Format Counters
13/11/29 14:41:08 INFO mapred.JobClient: Bytes Written=0
13/11/29 14:41:08 INFO mapred.JobClient: FileSystemCounters
13/11/29 14:41:08 INFO mapred.JobClient: HDFS_BYTES_READ=87
13/11/29 14:41:08 INFO mapred.JobClient: FILE_BYTES_WRITTEN=84777
13/11/29 14:41:08 INFO mapred.JobClient: File Input Format Counters
13/11/29 14:41:08 INFO mapred.JobClient: Bytes Read=0
13/11/29 14:41:08 INFO mapred.JobClient: Map-Reduce Framework
13/11/29 14:41:08 INFO mapred.JobClient: Map input records=21
13/11/29 14:41:08 INFO mapred.JobClient: Physical memory (bytes) snapshot=140988416
13/11/29 14:41:08 INFO mapred.JobClient: Spilled Records=0
13/11/29 14:41:08 INFO mapred.JobClient: CPU time spent (ms)=2610
13/11/29 14:41:08 INFO mapred.JobClient: Total committed heap usage (bytes)=150536192
13/11/29 14:41:08 INFO mapred.JobClient: Virtual memory (bytes) snapshot=808865792
13/11/29 14:41:08 INFO mapred.JobClient: Map output records=21
13/11/29 14:41:08 INFO mapred.JobClient: SPLIT_RAW_BYTES=87
13/11/29 14:41:08 INFO mapreduce.ImportJobBase: Transferred 0 bytes in 16.9658 seconds (0 bytes/sec)
13/11/29 14:41:08 INFO mapreduce.ImportJobBase: Retrieved 21 records.
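The log above comes from a Sqoop import into HBase rather than HDFS (note `Creating missing HBase table info` and `Transferred 0 bytes` — rows go to HBase, not to an HDFS output directory). The exact command is not reproduced here, but judging from the log (table `info` auto-created, one map task, 21 rows of `CFG_MOUDLE_LIMIT`, column family `info`), it would have been roughly of this shape. This is a hedged reconstruction: the row-key column `ID` is inferred from the scan output below, and the connection details are reused from the HDFS import at the top of this post.

```shell
# Hypothetical reconstruction of the HBase import command — not the
# author's verbatim invocation. Connection settings reuse the Oracle
# parameters from the earlier HDFS import; -P prompts for the password
# instead of exposing it on the command line.
sqoop import \
  --connect jdbc:oracle:thin:@172.30.1.215:1521:rtt \
  --username RTT -P \
  --table CFG_MOUDLE_LIMIT \
  --columns ID,MOUDLEID,NAME,LIMIT1,LIMIT2 \
  --hbase-create-table \
  --hbase-table info \
  --column-family info \
  --hbase-row-key ID \
  -m 1
```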
[hadoop@slave-245 ~]$ hbase shell
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 0.94.9, r1496217, Mon Jun 24 20:57:30 UTC 2013
hbase(main):001:0> list
TABLE
aaa
blog
hbase_dest
hbase_dest_10mins
hbase_dest_1day
hbase_dest_1hour
hbase_dest_2days
hbase_dest_2hours
hbase_dest_30mins
hbase_dest_5days
hbase_dest_5mins
info
s0d5_180days
s0de_180Days
tag_friend
test
16 row(s) in 1.4800 seconds
hbase(main):002:0> scan 'info'
ROW COLUMN+CELL
1 column=info:LIMIT1, timestamp=1385707257145, value=30000
1 column=info:LIMIT2, timestamp=1385707257145, value=100000
1 column=info:MOUDLEID, timestamp=1385707257145, value=1
1 column=info:NAME, timestamp=1385707257145, value=FBA
10 column=info:LIMIT1, timestamp=1385707257145, value=30000
10 column=info:LIMIT2, timestamp=1385707257145, value=100000
10 column=info:MOUDLEID, timestamp=1385707257145, value=10
10 column=info:NAME, timestamp=1385707257145, value=CHECK
11 column=info:LIMIT1, timestamp=1385707257145, value=30000
11 column=info:LIMIT2, timestamp=1385707257145, value=100000
11 column=info:MOUDLEID, timestamp=1385707257145, value=11
11 column=info:NAME, timestamp=1385707257145, value=MBA
12 column=info:LIMIT1, timestamp=1385707257145, value=30000
12 column=info:LIMIT2, timestamp=1385707257145, value=100000
12 column=info:MOUDLEID, timestamp=1385707257145, value=12
12 column=info:NAME, timestamp=1385707257145, value=MBB
13 column=info:LIMIT1, timestamp=1385707257145, value=30000
13 column=info:LIMIT2, timestamp=1385707257145, value=100000
13 column=info:MOUDLEID, timestamp=1385707257145, value=13
13 column=info:NAME, timestamp=1385707257145, value=FUND
14 column=info:LIMIT1, timestamp=1385707257145, value=30000
14 column=info:LIMIT2, timestamp=1385707257145, value=100000
14 column=info:MOUDLEID, timestamp=1385707257145, value=14
14 column=info:NAME, timestamp=1385707257145, value=MATCH
15 column=info:LIMIT1, timestamp=1385707257145, value=30000
15 column=info:LIMIT2, timestamp=1385707257145, value=100000
15 column=info:MOUDLEID, timestamp=1385707257145, value=15
15 column=info:NAME, timestamp=1385707257145, value=FBA
16 column=info:LIMIT1, timestamp=1385707257145, value=30000
16 column=info:LIMIT2, timestamp=1385707257145, value=100000
16 column=info:MOUDLEID, timestamp=1385707257145, value=16
16 column=info:NAME, timestamp=1385707257145, value=FBB
17 column=info:LIMIT1, timestamp=1385707257145, value=30000
17 column=info:LIMIT2, timestamp=1385707257145, value=100000
17 column=info:MOUDLEID, timestamp=1385707257145, value=17
17 column=info:NAME, timestamp=1385707257145, value=CHECK
18 column=info:LIMIT1, timestamp=1385707257145, value=30000
18 column=info:LIMIT2, timestamp=1385707257145, value=100000
18 column=info:MOUDLEID, timestamp=1385707257145, value=18
18 column=info:NAME, timestamp=1385707257145, value=MBA
19 column=info:LIMIT1, timestamp=1385707257145, value=30000
19 column=info:LIMIT2, timestamp=1385707257145, value=100000
19 column=info:MOUDLEID, timestamp=1385707257145, value=19
19 column=info:NAME, timestamp=1385707257145, value=MBB
2 column=info:LIMIT1, timestamp=1385707257145, value=30000
2 column=info:LIMIT2, timestamp=1385707257145, value=100000
2 column=info:MOUDLEID, timestamp=1385707257145, value=2
2 column=info:NAME, timestamp=1385707257145, value=FBB
20 column=info:LIMIT1, timestamp=1385707257145, value=30000
20 column=info:LIMIT2, timestamp=1385707257145, value=100000
20 column=info:MOUDLEID, timestamp=1385707257145, value=20
20 column=info:NAME, timestamp=1385707257145, value=FUND
21 column=info:LIMIT1, timestamp=1385707257145, value=30000
21 column=info:LIMIT2, timestamp=1385707257145, value=100000
21 column=info:MOUDLEID, timestamp=1385707257145, value=21
21 column=info:NAME, timestamp=1385707257145, value=MATCH
3 column=info:LIMIT1, timestamp=1385707257145, value=30000
3 column=info:LIMIT2, timestamp=1385707257145, value=100000
3 column=info:MOUDLEID, timestamp=1385707257145, value=3
3 column=info:NAME, timestamp=1385707257145, value=CHECK
4 column=info:LIMIT1, timestamp=1385707257145, value=30000
4 column=info:LIMIT2, timestamp=1385707257145, value=100000
4 column=info:MOUDLEID, timestamp=1385707257145, value=4
4 column=info:NAME, timestamp=1385707257145, value=MBA
5 column=info:LIMIT1, timestamp=1385707257145, value=30000
5 column=info:LIMIT2, timestamp=1385707257145, value=100000
5 column=info:MOUDLEID, timestamp=1385707257145, value=5
5 column=info:NAME, timestamp=1385707257145, value=MBB
6 column=info:LIMIT1, timestamp=1385707257145, value=30000
6 column=info:LIMIT2, timestamp=1385707257145, value=100000
6 column=info:MOUDLEID, timestamp=1385707257145, value=6
6 column=info:NAME, timestamp=1385707257145, value=FUND
7 column=info:LIMIT1, timestamp=1385707257145, value=30000
7 column=info:LIMIT2, timestamp=1385707257145, value=100000
7 column=info:MOUDLEID, timestamp=1385707257145, value=7
7 column=info:NAME, timestamp=1385707257145, value=MATCH
8 column=info:LIMIT1, timestamp=1385707257145, value=30000
8 column=info:LIMIT2, timestamp=1385707257145, value=100000
8 column=info:MOUDLEID, timestamp=1385707257145, value=8
8 column=info:NAME, timestamp=1385707257145, value=FBA
9 column=info:LIMIT1, timestamp=1385707257145, value=30000
9 column=info:LIMIT2, timestamp=1385707257145, value=100000
9 column=info:MOUDLEID, timestamp=1385707257145, value=9
9 column=info:NAME, timestamp=1385707257145, value=FBB
21 row(s) in 0.7160 seconds
hbase(main):003:0>
Note: `--hbase-create-table` tells Sqoop to create the target HBase table if it does not already exist; `--hbase-table table` specifies the HBase table name; `--hbase-row-key xxx` specifies which source column becomes the row key; `--column-family or1` specifies the column family name.
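After an import, a full `scan` can be expensive on large tables. To spot-check a single imported row instead, hbase shell's `get` command fetches one row by key — for example row `1`, which per the scan output above holds the four cells `info:LIMIT1`, `info:LIMIT2`, `info:MOUDLEID`, and `info:NAME`:

```
hbase(main):003:0> get 'info', '1'
```

A mismatch between the Oracle row count and the HBase row count (here both 21) is the first thing to check if the import looks incomplete.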