Converting between Spark DataFrames and Lists (multiple rows into one row)

1. Converting a DataFrame column to a List

import spark.implicits._
var data_csv = Seq(
  ("ke,sun"),
  ("tian,sun")
).toDF("CST_NO")
    
+--------+
|  CST_NO|
+--------+
|  ke,sun|
|tian,sun|
+--------+

Collapse the CST_NO column into a single row, then collect that row into a List:

import org.apache.spark.sql.functions._

// Collapse all CST_NO values into one comma-separated string
// (the "multiple rows into one row" step from the title)
var data_tmp = data_csv.agg(concat_ws(",", collect_list("CST_NO")).as("CST_NO"))

var neg_tmp = data_tmp.select("CST_NO").collect().map(_(0)).toList
println(neg_tmp.length)

// Take the first (and only) row and split it back into individual values
var neg_list = neg_tmp(0).toString.split(",")

Output (spark-shell):
neg_tmp: List[Any] = List(ke,sun,tian,sun)
1
neg_list: Array[String] = Array(ke, sun, tian, sun)
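If you only need the column values as a typed Scala collection, collecting through a Dataset avoids the List[Any] detour. A minimal sketch, assuming the data_csv frame and spark session from above (the name cstList is illustrative):

import spark.implicits._

// Collect the column directly as a List[String]
val cstList: List[String] = data_csv.select("CST_NO").as[String].collect().toList
// cstList: List[String] = List(ke,sun, tian,sun)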


Deduplicating a List

1. The simplest and most direct way is distinct:

scala> val l = List(1,2,3,3,4,4,5,5,6,6,6,8,9)
l: List[Int] = List(1, 2, 3, 3, 4, 4, 5, 5, 6, 6, 6, 8, 9)

scala> l.distinct
res32: List[Int] = List(1, 2, 3, 4, 5, 6, 8, 9)


2. toSet (note that going through a Set does not preserve element order, as the result below shows):

scala> l.toSet.toList
res33: List[Int] = List(5, 1, 6, 9, 2, 3, 8, 4)
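When the duplicated values live in a DataFrame to begin with, it can be cheaper to deduplicate on the Spark side before collecting. A minimal sketch, assuming the data_csv frame from section 1 and spark.implicits._ in scope (the name distinctCst is illustrative):

// Let Spark deduplicate the column, then collect the distinct values
val distinctCst = data_csv.select("CST_NO").distinct().as[String].collect().toList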


2. Converting a List to a DataFrame column

// Note: all values must share one type — convert everything to String (null is fine); otherwise toDF throws an error
var lst = List[String]("57.54", "trusfortMeans", null, "20190720", "5852.00", null, null, "25.77", null)
var name_list = List("idm", "CO", "distrn","dayId", "Ant", "CLP", "CAC", "PE_num","CE")
import spark.implicits._
import org.apache.spark.sql.functions._

// The whole list becomes a single array-typed cell in one row
var df = List(lst.toArray).toDF("features")
// df: org.apache.spark.sql.DataFrame = [features: array<string>]

df.show()
+--------------------+
|            features|
+--------------------+
|[57.54, trusfortM...|
+--------------------+
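If you instead want each list element to become its own row in the column (rather than one array-typed cell), calling toDF on the list itself is enough. A minimal sketch, assuming the same lst and spark.implicits._ as above (the column name value is illustrative):

// One row per list element; null entries become null cells
val colDf = lst.toDF("value")
colDf.show(3)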

3. Converting a List to a DataFrame row

// name_list supplies the column names and lst the values of the single row.
// We continue from the one-row df ([features: array<string>]) built in section 2 above.


// The size of `elements` must equal the number of entries in the `features` array
val elements = name_list.toArray

// Build a SQL-like expression: one aliased column per array index
val sqlExpr = elements.zipWithIndex.map { case (alias, idx) => col("features").getItem(idx).as(alias) }

// Spread the array elements across the named columns
df = df.select(sqlExpr: _*)
df.show()

df: org.apache.spark.sql.DataFrame = [idm: string, CO: string ... 7 more fields]
+-----+-------------+------+--------+-------+----+----+------+----+
|  idm|           CO|distrn|   dayId|    Ant| CLP| CAC|PE_num|  CE|
+-----+-------------+------+--------+-------+----+----+------+----+
|57.54|trusfortMeans|  null|20190720|5852.00|null|null| 25.77|null|
+-----+-------------+------+--------+-------+----+----+------+----+
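The array-and-getItem detour can also be skipped by building the row and its schema explicitly with createDataFrame. A minimal sketch, assuming the same lst, name_list, and spark session (the names schema and rowDf are illustrative):

import org.apache.spark.sql.Row
import org.apache.spark.sql.types.{StructField, StructType, StringType}

// One nullable string field per column name
val schema = StructType(name_list.map(n => StructField(n, StringType, nullable = true)))

// A single Row holding all the values, turned into a one-row DataFrame
val rowDf = spark.createDataFrame(
  spark.sparkContext.parallelize(Seq(Row(lst: _*))),
  schema
)
rowDf.show()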

