In a previous post I covered how Spark SQL reads JSON files into a Dataset. In real-world development, however, JSON is not the only choice: CSV files, databases, and other sources can also feed Spark SQL. Since CSV comes up constantly in day-to-day work, I'd like to take this opportunity to share my study code.
1. About the CSV schema
When reading a CSV file, Spark SQL can treat the first line as a header and automatically infer the column names and schema, or you can specify the schema by hand. Automatic inference is driven by option parameters; here is the official documentation:
You can set the following CSV-specific options to deal with CSV files:
- sep (default ,): sets a single character as a separator for each field and value.
- encoding (default UTF-8): decodes the CSV files by the given encoding type.
- quote (default "): sets a single character used for escaping quoted values where the separator can be part of the value. If you would like to turn off quotations, you need to set not null but an empty string. This behaviour is different from com.databricks.spark.csv.
- escape (default \): sets a single character used for escaping quotes inside an already quoted value.
- charToEscapeQuoteEscaping (default escape or \0): sets a single character used for escaping the escape for the quote character. The default value is the escape character when the escape and quote characters are different, \0 otherwise.
- comment (default empty string): sets a single character used for skipping lines beginning with this character. By default, it is disabled.
- header (default false): uses the first line as names of columns.
- enforceSchema (default true): if set to true, the specified or inferred schema will be forcibly applied to datasource files, and headers in CSV files will be ignored. If the option is set to false, the schema will be validated against all headers in CSV files when the header option is set to true. Field names in the schema and column names in CSV headers are checked by their positions, taking into account spark.sql.caseSensitive. Though the default value is true, it is recommended to disable the enforceSchema option to avoid incorrect results.
- inferSchema (default false): infers the input schema automatically from data. It requires one extra pass over the data.
- samplingRatio (default 1.0): defines the fraction of rows used for schema inferring.
- ignoreLeadingWhiteSpace (default false): a flag indicating whether or not leading whitespaces from values being read should be skipped.
- ignoreTrailingWhiteSpace (default false): a flag indicating whether or not trailing whitespaces from values being read should be skipped.
- nullValue (default empty string): sets the string representation of a null value. Since 2.0.1, this applies to all supported types including the string type.
- emptyValue (default empty string): sets the string representation of an empty value.
- nanValue (default NaN): sets the string representation of a non-number value.
- positiveInf (default Inf): sets the string representation of a positive infinity value.
- negativeInf (default -Inf): sets the string representation of a negative infinity value.
- dateFormat (default yyyy-MM-dd): sets the string that indicates a date format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to the date type.
- timestampFormat (default yyyy-MM-dd'T'HH:mm:ss.SSSXXX): sets the string that indicates a timestamp format. Custom date formats follow the formats at java.text.SimpleDateFormat. This applies to the timestamp type.
- maxColumns (default 20480): defines a hard limit of how many columns a record can have.
- maxCharsPerColumn (default -1): defines the maximum number of characters allowed for any given value being read. By default, it is -1, meaning unlimited length.
- mode (default PERMISSIVE): allows a mode for dealing with corrupt records during parsing. It supports the following case-insensitive modes (see the sketch after this list):
  - PERMISSIVE: when it meets a corrupted record, puts the malformed string into a field configured by columnNameOfCorruptRecord, and sets other fields to null. To keep corrupt records, a user can set a string type field named columnNameOfCorruptRecord in a user-defined schema. If the schema does not have the field, corrupt records are dropped during parsing. A record with fewer or more tokens than the schema is not a corrupted record to CSV: when a record has fewer tokens than the length of the schema, the missing fields are set to null; when it has more tokens, the extra tokens are dropped.
  - DROPMALFORMED: ignores whole corrupted records.
  - FAILFAST: throws an exception when it meets corrupted records.
- columnNameOfCorruptRecord (default is the value specified in spark.sql.columnNameOfCorruptRecord): allows renaming the new field holding the malformed string created by PERMISSIVE mode. This overrides spark.sql.columnNameOfCorruptRecord.
- multiLine (default false): parse one record, which may span multiple lines.
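As referenced in the mode entry above, here is a minimal sketch of keeping malformed rows for inspection instead of silently losing them. It reuses the userList.csv path from later in this post; the _corrupt_record column name is my own choice (as far as I know it matches the default of spark.sql.columnNameOfCorruptRecord):
// declare an extra string column to receive the raw text of unparseable rows
StructType schemaWithCorrupt = new StructType()
        .add("id", DataTypes.IntegerType, true)
        .add("name", DataTypes.StringType, true)
        .add("age", DataTypes.IntegerType, true)
        .add("_corrupt_record", DataTypes.StringType, true);
Dataset<Row> withCorrupt = spark.read()
        .schema(schemaWithCorrupt)
        .option("mode", "PERMISSIVE")
        .option("columnNameOfCorruptRecord", "_corrupt_record")
        .csv("/home/cry/myStudyData/userList.csv");
// rows that failed to parse have null data columns and keep the raw line here
withCorrupt.filter("_corrupt_record is not null").show(false);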
The list of options looks long, but most of them have sensible defaults; in practice you only need to specify a handful, as shown below:
Dataset<Row> ds = spark.read()
        // infer column types automatically
        .option("inferSchema", "true")
        // string that represents a null value (note the camel case: nullValue)
        .option("nullValue", "?")
        // when true, the first line names the columns and is excluded from the data
        .option("header", "true")
        .csv("/home/cry/myStudyData/userList.csv");
If you prefer not to rely on inference, you can also specify the schema manually:
// build the schema by hand: id, name and age, all nullable
List<StructField> fs = new ArrayList<StructField>();
StructField f1 = DataTypes.createStructField("id", DataTypes.IntegerType, true);
StructField f2 = DataTypes.createStructField("name", DataTypes.StringType, true);
StructField f3 = DataTypes.createStructField("age", DataTypes.IntegerType, true);
fs.add(f1);
fs.add(f2);
fs.add(f3);
StructType schema = DataTypes.createStructType(fs);
Dataset<Row> ds = spark.read().schema(schema).csv("/home/cry/myStudyData");
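One caveat worth noting: supplying a schema skips the inference pass, but if the files still start with a header line, that line would otherwise be parsed as an ordinary record (its string values becoming nulls in the integer columns under the default PERMISSIVE mode). A sketch under that assumption combines the explicit schema with the header option:
Dataset<Row> dsWithHeader = spark.read()
        .schema(schema)
        // skip the header line instead of parsing it as data
        .option("header", "true")
        .csv("/home/cry/myStudyData");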
2. The complete code
package com.debug;

import java.util.ArrayList;
import java.util.List;

import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.StructField;
import org.apache.spark.sql.types.StructType;

public class ReadCsv {

    public static void main(String[] args) {
        SparkSession spark = SparkSession.builder()
                .appName("read csv and run statistics")
                .master("local[*]")
                .getOrCreate();

        // build the schema by hand: id, name and age, all nullable
        List<StructField> fs = new ArrayList<StructField>();
        StructField f1 = DataTypes.createStructField("id", DataTypes.IntegerType, true);
        StructField f2 = DataTypes.createStructField("name", DataTypes.StringType, true);
        StructField f3 = DataTypes.createStructField("age", DataTypes.IntegerType, true);
        fs.add(f1);
        fs.add(f2);
        fs.add(f3);
        StructType schema = DataTypes.createStructType(fs);

        /*Dataset<Row> ds = spark.read()
                // infer column types automatically
                .option("inferSchema", "true")
                // string that represents a null value
                .option("nullValue", "?")
                // when true, the first line names the columns and is excluded from the data
                .option("header", "true")
                .csv("/home/cry/myStudyData/userList.csv");*/
        Dataset<Row> ds = spark.read().schema(schema).csv("/home/cry/myStudyData");

        // register a temp view and query it with plain SQL
        ds.createOrReplaceTempView("user");
        Dataset<Row> res = spark.sql("select * from user where age > 25");
        res.show();

        spark.stop();
    }
}
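For reference, the same age > 25 filter can also be expressed through the Dataset API instead of a temp view; a minimal equivalent sketch (requires a static import of org.apache.spark.sql.functions.col):
Dataset<Row> res2 = ds.filter(col("age").gt(25));
res2.show();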
The content of one of the CSV files looks like this: