How do you use Dataset.show in Spark, and what is worth noting about it?

Preface

This article belongs to my column 《大数据技术体系》 (Big Data Technology System). It is original work; please credit the source when quoting, and feel free to point out mistakes or omissions in the comments. Thanks!

For the column's table of contents and references, see 大数据技术体系.


WHAT

Dataset.show comes up constantly in day-to-day Spark development and testing.

It displays the contents of a Dataset in the console.
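As a minimal sketch (assuming a local SparkSession; the object name and the tiny DataFrame here are illustrative and not part of the article's project):

import org.apache.spark.sql.SparkSession

object ShowSketch extends App {
  val spark = SparkSession.builder.master("local[*]").appName("ShowSketch").getOrCreate()

  // A tiny DataFrame, just to demonstrate the default tabular output.
  val df = spark.range(3).toDF("id")
  df.show()
  // +---+
  // | id|
  // +---+
  // |  0|
  // |  1|
  // |  2|
  // +---+

  spark.stop()
}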


Overloaded methods

def show(numRows: Int): Unit

def show(): Unit

def show(truncate: Boolean): Unit

def show(numRows: Int, truncate: Boolean): Unit

def show(numRows: Int, truncate: Int): Unit
 
def show(numRows: Int, truncate: Int, vertical: Boolean): Unit
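All six overloads funnel into the most general form. Paraphrasing the delegation (based on my reading of the Spark 3.x source; exact defaults may vary by version), and reusing the df from the sketch above:

// Paraphrased delegation, not the verbatim Spark source:
df.show()                     // == df.show(20)
df.show(5)                    // == df.show(5, truncate = true)
df.show(false)                // == df.show(20, truncate = false)
df.show(5, truncate = true)   // truncate = true maps to a cell width of 20 characters
df.show(5, truncate = 25)     // == df.show(5, truncate = 25, vertical = false)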

Usage notes

So what is worth noting about these six overloads?

1. vertical

show() has two print modes.

The default is the horizontal (tabular) layout, as shown below:

  year  month AVG('Adj Close) MAX('Adj Close)
  1980  12    0.503218        0.595103
  1981  01    0.523289        0.570307
  1982  02    0.436504        0.475256
  1983  03    0.410516        0.442194
  1984  04    0.450090        0.483521

The other is the vertical layout, as shown below:

-RECORD 0-------------------
 year            | 1980
 month           | 12
 AVG('Adj Close) | 0.503218
 MAX('Adj Close) | 0.595103
-RECORD 1-------------------
 year            | 1981
 month           | 01
 AVG('Adj Close) | 0.523289
 MAX('Adj Close) | 0.570307
-RECORD 2-------------------
 year            | 1982
 month           | 02
 AVG('Adj Close) | 0.436504
 MAX('Adj Close) | 0.475256
-RECORD 3-------------------
 year            | 1983
 month           | 03
 AVG('Adj Close) | 0.410516
 MAX('Adj Close) | 0.442194
-RECORD 4-------------------
 year            | 1984
 month           | 04
 AVG('Adj Close) | 0.450090
 MAX('Adj Close) | 0.483521
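The vertical layout is produced by passing vertical = true to the three-argument overload, for example:

df.show(5, truncate = 20, vertical = true)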

2. numRows

show() takes a numRows parameter that controls how many rows are displayed; the default is 20.
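For example:

df.show()    // displays the first 20 rows (default)
df.show(50)  // displays the first 50 rows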

3. truncate

  1. show() accepts a truncate parameter that limits how many characters of each cell are displayed; with truncation enabled, every cell is right-aligned.
  2. If a string is longer than truncate (default 20), only the first truncate - 3 characters are kept and "..." is appended (see the sketch after this list):
str.substring(0, truncate - 3) + "..."
  3. Columns of type Array[Byte] are printed as hexadecimal bytes wrapped in "[", " ", "]":
binary.map("%02X".format(_)).mkString("[", " ", "]")
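A standalone sketch of these two rules (my own reconstruction of the behavior described above, not the verbatim Spark source; the truncate < 4 guard mirrors how Spark avoids a negative substring index):

object TruncateSketch extends App {
  // Rule 2: cells longer than `truncate` keep the first `truncate - 3`
  // characters and get "..." appended.
  def truncateCell(str: String, truncate: Int): String =
    if (truncate <= 0 || str.length <= truncate) str
    else if (truncate < 4) str.substring(0, truncate) // too narrow to fit "..."
    else str.substring(0, truncate - 3) + "..."

  println(truncateCell("[address_1, address_2, address_3]", 20)) // [address_1, addre...

  // Rule 3: Array[Byte] cells are rendered as hex bytes in brackets.
  val binary = Array[Byte](1, 10, -1)
  println(binary.map("%02X".format(_)).mkString("[", " ", "]"))  // [01 0A FF]
}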

For a detailed source-level walkthrough of Dataset.show, see my post Spark SQL Workflow Source Code Analysis (Part 4): the optimization phase (based on Spark 3.3.0).


Practice

Source code

The spark-examples code is open source. The project aims to provide the most practice-oriented guide to learning Apache Spark development.

Click the link to download the source from GitHub: spark-examples


Data

Note that Alice's third address, " address_3", contains a leading space, which is why a double space shows up in the untruncated output below.

{"name": "Alice","age": 18,"sex": "Female","addr": ["address_1","address_2", " address_3"]}
{"name": "Thomas","age": 20, "sex": "Male","addr": ["address_1"]}
{"name": "Tom","age": 50, "sex": "Male","addr": ["address_1","address_2","address_3"]}
{"name": "Catalina","age": 30, "sex": "Female","addr": ["address_1","address_2"]}

Code

package com.shockang.study.spark.sql.show

import com.shockang.study.spark.SQL_DATA_DIR
import com.shockang.study.spark.util.Utils.formatPrint
import org.apache.spark.sql.SparkSession

/**
 * Demonstrates the overloaded variants of Dataset.show.
 *
 * @author Shockang
 */
object ShowExample {

  val DATA_PATH: String = SQL_DATA_DIR + "user.json"

  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder
      .master("local[*]")
      .appName("ShowExample")
      .getOrCreate()

    spark.sparkContext.setLogLevel("ERROR")

    spark.read.json(DATA_PATH).createTempView("t_user")

    val df = spark.sql("SELECT * FROM t_user")

    formatPrint("""df.show""")
    df.show

    formatPrint("""df.show(2)""")
    df.show(2)

    formatPrint("""df.show(true)""")
    df.show(true)
    formatPrint("""df.show(false)""")
    df.show(false)

    formatPrint("""df.show(2, truncate = true)""")
    df.show(2, truncate = true)
    formatPrint("""df.show(2, truncate = false)""")
    df.show(2, truncate = false)

    formatPrint("""df.show(2, truncate = 0)""")
    df.show(2, truncate = 0)
    formatPrint("""df.show(2, truncate = 20)""")
    df.show(2, truncate = 20)

    formatPrint("""df.show(2, truncate = 0, vertical = true)""")
    df.show(2, truncate = 0, vertical = true)
    formatPrint("""df.show(2, truncate = 20, vertical = true)""")
    df.show(2, truncate = 20, vertical = true)
    formatPrint("""df.show(2, truncate = 0, vertical = false)""")
    df.show(2, truncate = 0, vertical = false)
    formatPrint("""df.show(2, truncate = 20, vertical = false)""")
    df.show(2, truncate = 20, vertical = false)

    spark.stop()
  }
}

Output

========== df.show ==========
+--------------------+---+--------+------+
|                addr|age|    name|   sex|
+--------------------+---+--------+------+
|[address_1, addre...| 18|   Alice|Female|
|         [address_1]| 20|  Thomas|  Male|
|[address_1, addre...| 50|     Tom|  Male|
|[address_1, addre...| 30|Catalina|Female|
+--------------------+---+--------+------+

========== df.show(2) ==========
+--------------------+---+------+------+
|                addr|age|  name|   sex|
+--------------------+---+------+------+
|[address_1, addre...| 18| Alice|Female|
|         [address_1]| 20|Thomas|  Male|
+--------------------+---+------+------+
only showing top 2 rows

========== df.show(true) ==========
+--------------------+---+--------+------+
|                addr|age|    name|   sex|
+--------------------+---+--------+------+
|[address_1, addre...| 18|   Alice|Female|
|         [address_1]| 20|  Thomas|  Male|
|[address_1, addre...| 50|     Tom|  Male|
|[address_1, addre...| 30|Catalina|Female|
+--------------------+---+--------+------+

========== df.show(false) ==========
+----------------------------------+---+--------+------+
|addr                              |age|name    |sex   |
+----------------------------------+---+--------+------+
|[address_1, address_2,  address_3]|18 |Alice   |Female|
|[address_1]                       |20 |Thomas  |Male  |
|[address_1, address_2, address_3] |50 |Tom     |Male  |
|[address_1, address_2]            |30 |Catalina|Female|
+----------------------------------+---+--------+------+

========== df.show(2, truncate = true) ==========
+--------------------+---+------+------+
|                addr|age|  name|   sex|
+--------------------+---+------+------+
|[address_1, addre...| 18| Alice|Female|
|         [address_1]| 20|Thomas|  Male|
+--------------------+---+------+------+
only showing top 2 rows

========== df.show(2, truncate = false) ==========
+----------------------------------+---+------+------+
|addr                              |age|name  |sex   |
+----------------------------------+---+------+------+
|[address_1, address_2,  address_3]|18 |Alice |Female|
|[address_1]                       |20 |Thomas|Male  |
+----------------------------------+---+------+------+
only showing top 2 rows

========== df.show(2, truncate = 0) ==========
+----------------------------------+---+------+------+
|addr                              |age|name  |sex   |
+----------------------------------+---+------+------+
|[address_1, address_2,  address_3]|18 |Alice |Female|
|[address_1]                       |20 |Thomas|Male  |
+----------------------------------+---+------+------+
only showing top 2 rows

========== df.show(2, truncate = 20) ==========
+--------------------+---+------+------+
|                addr|age|  name|   sex|
+--------------------+---+------+------+
|[address_1, addre...| 18| Alice|Female|
|         [address_1]| 20|Thomas|  Male|
+--------------------+---+------+------+
only showing top 2 rows

========== df.show(2, truncate = 0, vertical = true) ==========
-RECORD 0----------------------------------
 addr | [address_1, address_2,  address_3] 
 age  | 18                                 
 name | Alice                              
 sex  | Female                             
-RECORD 1----------------------------------
 addr | [address_1]                        
 age  | 20                                 
 name | Thomas                             
 sex  | Male                               
only showing top 2 rows

========== df.show(2, truncate = 20, vertical = true) ==========
-RECORD 0--------------------
 addr | [address_1, addre... 
 age  | 18                   
 name | Alice                
 sex  | Female               
-RECORD 1--------------------
 addr | [address_1]          
 age  | 20                   
 name | Thomas               
 sex  | Male                 
only showing top 2 rows

========== df.show(2, truncate = 0, vertical = false) ==========
+----------------------------------+---+------+------+
|addr                              |age|name  |sex   |
+----------------------------------+---+------+------+
|[address_1, address_2,  address_3]|18 |Alice |Female|
|[address_1]                       |20 |Thomas|Male  |
+----------------------------------+---+------+------+
only showing top 2 rows

========== df.show(2, truncate = 20, vertical = false) ==========
+--------------------+---+------+------+
|                addr|age|  name|   sex|
+--------------------+---+------+------+
|[address_1, addre...| 18| Alice|Female|
|         [address_1]| 20|Thomas|  Male|
+--------------------+---+------+------+
only showing top 2 rows