Importing a CSV into Hive: loading a local CSV file into a Hive table

Background: data exported from MySQL and saved as CSV files.

Two files were exported: record1.csv, which includes a CSV header row, and record0.csv, which does not.

try 1

Create the table and load the file directly:

sql = '''
CREATE TABLE IF NOT EXISTS default.records
(
    exer_recore_id BIGINT,
    user_id INT,
    channel TINYINT,
    item_id INT,
    question_id INT,
    `type` TINYINT,
    question_num INT,
    orgin TINYINT,
    orgin_id INT,
    cate_id INT,
    is_hf TINYINT,
    user_answer STRING,
    correct_answer STRING,
    is_correct TINYINT COMMENT "correct or not",
    dutration INT,
    record_id INT,
    subject_1 INT,
    subject_2 INT,
    subject_3 INT,
    status TINYINT,
    is_signed TINYINT,
    create_time INT,
    practice_level_1 STRING,
    practice_level_2 STRING,
    practice_level_3 STRING,
    update_time INT
)
COMMENT "exercise records"
'''

spark.sql(sql)
spark.sql("LOAD DATA LOCAL INPATH '*/Desktop/record0.csv' OVERWRITE INTO TABLE records")
df = spark.sql("select exer_recore_id, user_id, is_correct from records")
df.show()
df.printSchema()

Output:

+--------------+-------+----------+
|exer_recore_id|user_id|is_correct|
+--------------+-------+----------+
|          null|   null|      null|
|          null|   null|      null|
|          null|   null|      null|
|          null|   null|      null|
|          null|   null|      null|

The import failed: every column is null.

try 2

Add a field delimiter to the CREATE TABLE statement:

comment "exercise records"
ROW FORMAT delimited fields terminated by ','

Result: still failed. Even with the CSV delimiter ',' specified, every column comes back null — the default SerDe splits on the comma but keeps the surrounding quotes as part of each field, so the quoted values fail to cast to the numeric column types.
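A quick sketch of why those quoted fields turn into nulls — plain Python mimicking Hive's lenient cast-on-read behavior, which yields NULL instead of raising when a value fails to parse:

```python
# With ROW FORMAT DELIMITED, Hive splits on ',' but keeps the quotes,
# so an int column receives the token '"11"' rather than '11'.
def hive_int_cast(token):
    """Mimic Hive's lenient cast: return None (NULL) on parse failure."""
    try:
        return int(token)
    except ValueError:
        return None

print(hive_int_cast('"11"'))   # quotes left in -> None (NULL in Hive)
print(hive_int_cast('11'))     # clean token    -> 11
```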

Inspecting the file in a text editor:

(figure: raw contents of the CSV file)
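When eyeballing the raw file is inconclusive, Python's standard `csv.Sniffer` can guess the dialect from a sample. A sketch using made-up rows shaped like the quoted output seen later in this post:

```python
import csv

# Hypothetical sample shaped like the quoted rows shown later in this post.
sample = (
    '"1001110000021295","11","0","109"\n'
    '"1001110000021296","11","0","109"\n'
)
dialect = csv.Sniffer().sniff(sample)
print(dialect.delimiter)   # ','
print(dialect.quotechar)   # '"'
```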

try 3

Switch to the OpenCSVSerde, which understands quoted fields:

comment "exercise records"
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
    "separatorChar" = ",",
    "quoteChar" = "'"
)

Success — the data loads.

Next problem: keeping the header row out of the table. Add the table property tblproperties("skip.header.line.count"="1") — the "1" means skip the first line of each file.

comment "exercise records"
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
    "separatorChar" = ",",
    "quoteChar" = "'"
)
tblproperties("skip.header.line.count"="1")

Solved.

again

Spoke too soon — a careless slip. The output of the query above:

+------------------+-------+----------+---------+
|    exer_recore_id|user_id|is_correct|record_id|
+------------------+-------+----------+---------+
|"1001110000021295"|   "11"|       "0"|    "109"|
|"1001110000021296"|   "11"|       "0"|    "109"|
|"1001110000021297"|   "11"|       "1"|    "109"|

root
 |-- exer_recore_id: string (nullable = true)
 |-- user_id: string (nullable = true)
 |-- is_correct: string (nullable = true)
 |-- record_id: string (nullable = true)

Every value comes back as a string, still wrapped in " quotes.

The file actually quotes its fields with " rather than ', so quoteChar must be the (escaped) double quote:

ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
    "separatorChar" = ",",
    "quoteChar" = "\"",
    "escapeChar" = "\\"
)
tblproperties("skip.header.line.count"="1")

Output:

+----------------+-------+----------+---------+
|  exer_recore_id|user_id|is_correct|record_id|
+----------------+-------+----------+---------+
|1001110000021295|     11|         0|      109|
|1001110000021296|     11|         0|      109|
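One caveat: OpenCSVSerde exposes every column as STRING regardless of the declared types, so the schema will still print string even though the quotes are gone. A common workaround is a typed view over the raw table; a sketch that builds the CAST list programmatically (records_typed is a hypothetical view name, and the column list is trimmed to the columns queried in this post):

```python
# OpenCSVSerde yields STRING columns; build a view that casts them back.
cols = {
    "exer_recore_id": "BIGINT",
    "user_id": "INT",
    "is_correct": "TINYINT",
    "record_id": "INT",
}
select_list = ", ".join(f"CAST({c} AS {t}) AS {c}" for c, t in cols.items())
view_sql = f"CREATE VIEW records_typed AS SELECT {select_list} FROM records"
print(view_sql)
# then: spark.sql(view_sql)
```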
