Local CSV file test:
2. Create the table
Create the CSV_BULK_LOAD table from the Phoenix CLI:
> create table CSV_BULK_LOAD (id varchar primary key, account varchar, passwd varchar);
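If the CLI is not already open, it can be started with sqlline.py; the quorum address below reuses this cluster's ZooKeeper nodes and port and is an assumption:

bin/sqlline.py 172.16.11.221,172.16.11.222,172.16.11.223:2800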
3. Add test data
Create data.csv under the PHOENIX_HOME directory with the following contents:
001,google,AM
002,baidu,BJ
003,alibaba,HZ
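One quick way to create the file from the shell (a minimal sketch):

cat > data.csv <<'EOF'
001,google,AM
002,baidu,BJ
003,alibaba,HZ
EOF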
4. Run from the Phoenix directory:
bin/psql.py -t CSV_BULK_LOAD test/data.csv
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/home/phoenix4.9.0/phoenix-4.9.0-HBase-1.2-client.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/home/hadoop2.73/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
17/08/16 16:56:28 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
csv columns from database.
CSV Upsert complete. 3 rows upserted
Time: 0.086 sec(s)
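When no connection string is given, psql.py connects to the ZooKeeper on localhost; a remote cluster can be targeted by passing the quorum before the CSV path (the address below is an assumption based on this cluster):

bin/psql.py -t CSV_BULK_LOAD 172.16.11.221:2800 test/data.csv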
5. Verify:
0: jdbc:phoenix:> select * from CSV_BULK_LOAD;
+------+----------+---------+
| ID | ACCOUNT | PASSWD |
+------+----------+---------+
| 001 | google | AM |
| 002 | baidu | BJ |
| 003 | alibaba | HZ |
+------+----------+---------+
3 rows selected (0.083 seconds)
6. Load a plain file with MapReduce
data2.txt contents:
004,aaagoogle,EEEEAM
005,dddddbaidu,EEEEBJ
006,cccccalibaba,EEEEHZ
7. From the Phoenix directory (data2.txt must also sit directly in this directory, not in a subdirectory; the reason is unclear):
[root@slaver3 phoenix4.9.0]# HADOOP_CLASSPATH=/home/hbase1.25/lib/hbase-protocol-1.2.5.jar:/home/hbase1.25/conf hadoop jar phoenix-4.9.0-HBase-1.2-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -t CSV_BULK_LOAD -i file:/home/phoenix4.9.0/data2.txt -z 172.16.11.221,172.16.11.222,172.16.11.223:2800
If -z is not specified explicitly, the default port is 2181; the port value, of course, depends on how your own Hadoop/HBase installation is configured.
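As a variant, the Phoenix documentation suggests building HADOOP_CLASSPATH from hbase mapredcp instead of pointing at the protocol jar by hand; combined with a ZooKeeper quorum on the default port 2181, the call would look roughly like this (a sketch reusing the same cluster paths as above):

HADOOP_CLASSPATH=$(hbase mapredcp):/home/hbase1.25/conf hadoop jar phoenix-4.9.0-HBase-1.2-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -t CSV_BULK_LOAD -i file:/home/phoenix4.9.0/data2.txt -z 172.16.11.221,172.16.11.222,172.16.11.223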
The MapReduce loader accepts the following parameters:

| Parameter | Description |
| --- | --- |
| -i, --input | Input CSV path (mandatory) |
| -t, --table | Phoenix table name (mandatory) |
| -a, --array-delimiter | Array element delimiter (optional) |
| -c, --import-columns | Comma-separated list of columns to import |
| -d, --delimiter | Input field delimiter, defaults to comma |
| -g, --ignore-errors | Ignore input errors |
| -o, --output | Output path for temporary HFiles (optional) |
| -s, --schema | Phoenix schema name (optional) |
| -z, --zookeeper | ZooKeeper quorum to connect to (optional) |
| -it, --index-table | Name of the index table to load (optional) |
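Several of these flags can be combined. A hedged sketch: loading a hypothetical tab-delimited file (data.tsv, made up for illustration) into only the ID and ACCOUNT columns, skipping malformed rows, and writing the intermediate HFiles to an explicit temp directory:

hadoop jar phoenix-4.9.0-HBase-1.2-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -t CSV_BULK_LOAD -c ID,ACCOUNT -d $'\t' -g -o /tmp/csv_bulk_load_hfiles -i /Phoenix/data.tsv -z 172.16.11.221,172.16.11.222,172.16.11.223:2800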
8. Verify:
0: jdbc:phoenix:> select * from CSV_BULK_LOAD;
+------+---------------+----------+
| ID | ACCOUNT | PASSWD |
+------+---------------+----------+
| 001 | google | AM |
| 002 | baidu | BJ |
| 003 | alibaba | HZ |
| 004 | aaagoogle | EEEEAM |
| 005 | dddddbaidu | EEEEBJ |
| 006 | cccccalibaba | EEEEHZ |
+------+---------------+----------+
9. Importing HFiles with the MapReduce importer
data3.csv contents:
007,MMMMgoogle,HHHHHAM
008,AAAAbaidu,EEEEEBJ
009,CCCCalibaba,NNNNNHZ
10. Load it with the importer:
[root@slaver3 phoenix4.9.0]# hadoop jar phoenix-4.9.0-HBase-1.2-client.jar org.apache.phoenix.mapreduce.CsvBulkLoadTool -t CSV_BULK_LOAD -i /Phoenix/data3.csv -z 172.16.11.221,172.16.11.222,172.16.11.223:2800
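Note that /Phoenix/data3.csv carries no file: scheme, so the path is resolved against the default filesystem, i.e. HDFS; the file must therefore be uploaded there before running the job, for example:

hdfs dfs -mkdir -p /Phoenix
hdfs dfs -put data3.csv /Phoenix/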
11. Verify:
0: jdbc:phoenix:> select * from CSV_BULK_LOAD;
+------+---------------+-----------+
| ID | ACCOUNT | PASSWD |
+------+---------------+-----------+
| 001 | google | AM |
| 002 | baidu | BJ |
| 003 | alibaba | HZ |
| 004 | aaagoogle | EEEEAM |
| 005 | dddddbaidu | EEEEBJ |
| 006 | cccccalibaba | EEEEHZ |
| 007 | MMMMgoogle | HHHHHAM |
| 008 | AAAAbaidu | EEEEEBJ |
| 009 | CCCCalibaba | NNNNNHZ |
+------+---------------+-----------+
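As a final sanity check, a row count should match the nine rows listed above:

0: jdbc:phoenix:> select count(*) from CSV_BULK_LOAD;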