insert overwrite local directory '/url/lxb/hive'
row format delimited
fields terminated by ','
select * from table_name limit 100;
hive -e "set hive.cli.print.header=true; select * from table_name where some_query_conditions" | sed 's/[\t]/,/g' > test.csv
set hive.cli.print.header=true prints the column headers in the output
sed 's/[\t]/,/g' replaces each \t with , (comma-delimited)
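One caveat with the sed approach: it does no CSV quoting, so any field value that itself contains a comma produces a malformed row. A minimal sketch of a more robust conversion, using Python's csv module (the function name tsv_to_csv is ours, not part of any tool above):

```python
import csv
import io

def tsv_to_csv(tsv_text: str) -> str:
    """Convert tab-delimited text (e.g. hive -e output) to properly quoted CSV."""
    out = io.StringIO()
    writer = csv.writer(out, lineterminator="\n")  # quotes fields containing commas
    for row in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        writer.writerow(row)
    return out.getvalue()

# A value containing a comma survives intact instead of splitting the row
print(tsv_to_csv("id\tname\n1\tDoe, John\n"))
```

You could pipe hive -e output through a script like this instead of sed when field values may contain commas.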
spark-shell
val df = spark.sql("select * from test.student3")
df.write.csv("/hdfs/output/dir")
hadoop fs -get /hdfs/output/dir XXX
Note that the path here is an HDFS directory: Spark writes many small CSV part files under it. After pulling the directory to the local filesystem, merge them with cat *.csv > one.csv.
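Plain cat works here because Spark's CSV writer emits no header by default; if you enable headers (e.g. with .option("header", "true")), every part file repeats the header row. A hedged sketch of a merge that keeps only the first header (merge_csv_parts and the demo paths are illustrative, not from the original):

```python
import csv
import glob
import os
import tempfile

def merge_csv_parts(pattern: str, dest: str) -> None:
    """Concatenate part-*.csv files, keeping only the first header row."""
    with open(dest, "w", newline="") as out:
        writer = csv.writer(out, lineterminator="\n")
        header_written = False
        for path in sorted(glob.glob(pattern)):
            with open(path, newline="") as f:
                rows = csv.reader(f)
                header = next(rows, None)
                if header is None:
                    continue  # skip empty part files
                if not header_written:
                    writer.writerow(header)
                    header_written = True
                for row in rows:
                    writer.writerow(row)

# Demo: two part files, each carrying the same header
tmp = tempfile.mkdtemp()
for i, body in enumerate(["1,Alice", "2,Bob"]):
    with open(os.path.join(tmp, f"part-{i:05d}.csv"), "w") as f:
        f.write("id,name\n" + body + "\n")
merge_csv_parts(os.path.join(tmp, "part-*.csv"), os.path.join(tmp, "one.csv"))
```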