PySpark Usage Notes

Background

PySpark communicates with the underlying Spark engine through an RPC server, using Py4j to call into the Spark core via its API.
Spark (written in Scala) is much faster than Hadoop. Spark can be tuned through many configuration parameters, including the degree of parallelism, resource usage, how data is stored, and so on.
A Resilient Distributed Dataset (RDD) is the Spark unit that can be operated on in parallel; it is an immutable, partitioned collection of elements.
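
A minimal sketch of working with an RDD (assuming a SparkContext named sc, created as in the Usage section below):

# Build an RDD from a local collection, split into 2 partitions
rdd = sc.parallelize([1, 2, 3, 4], numSlices=2)
# Transformations return a new RDD; the original stays immutable
rdd.map(lambda x: x * 2).collect()
# [2, 4, 6, 8]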

Installing PySpark

pip install pyspark

Usage

Connecting to a Spark Cluster

from pyspark import SparkContext, SparkConf
conf = SparkConf().setAppName("sparkAppExample")
sc = SparkContext(conf=conf)  # entry point for the low-level RDD API
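
The configuration above uses whatever master the environment provides; a hedged sketch of pointing the same SparkConf at an explicit cluster (the master URL below is a placeholder):

# "spark://master-host:7077" is a hypothetical cluster address; use "local[*]" for local testing
conf = SparkConf().setAppName("sparkAppExample").setMaster("spark://master-host:7077")
sc = SparkContext(conf=conf)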

Spark DataFrame

from pyspark.sql import SparkSession
spark = SparkSession.builder \
          .master("local") \
          .appName("Word Count").enableHiveSupport() \
          .config("spark.some.config.option", "some-value") \
          .getOrCreate()
# getOrCreate() reuses an existing session if one exists, otherwise it creates a new one
# .enableHiveSupport() is only needed when working with Hive tables
# Set the log level
spark.sparkContext.setLogLevel('ERROR')

Spark Config Entries

  • Full list of configuration options: Spark Configuration
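
For example, runtime SQL options can be set and read back on an existing session (a small sketch; the values are only illustrative):

# Runtime SQL options can be changed on a live session
spark.conf.set("spark.sql.shuffle.partitions", "100")
spark.conf.get("spark.sql.shuffle.partitions")   # '100'

# Cluster-level options (e.g. spark.executor.memory) must be set before the session/context
# is created, for example via SparkSession.builder.config("spark.executor.memory", "4g")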

Working with the DataFrame Structure

A PySpark DataFrame is very similar to the DataFrame structure in pandas.
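
For instance (a minimal sketch, assuming pandas is installed), data can be moved between the two:

import pandas as pd

pdf = pd.DataFrame({'name': ['Li', 'Kio'], 'age': [12, 16]})
sdf = spark.createDataFrame(pdf)   # pandas -> Spark DataFrame
pdf_back = sdf.toPandas()          # Spark -> pandas DataFrame (collects all rows to the driver)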

Reading a Local File

# Define the Data
import json
people = [
    {'name': 'Li', 'age': 12, 'address': {'country': 'China', 'city': 'Nanjing'}},
    {'name': 'Richard', 'age': 14, 'address': {'country': 'USA', 'city': 'Los Angeles'}},
    {'name': 'Jacob', 'age': 12, 'address': {'country': 'France', 'city': 'Paris'}},
    {'name': 'Manuel', 'age': 12, 'address': {'country': 'UK', 'city': 'London'}},
    {'name': 'Kio', 'age': 16, 'address': {'country': 'Japan', 'city': 'Tokyo'}},
]
json.dump(people, open('people.json', 'w'))

# Load the data into PySpark (the schema is inferred automatically)
df = spark.read.load('people.json', format='json')
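
The same reader API handles other formats as well; a sketch (the extra file names are hypothetical):

# Equivalent shorthand for JSON
df = spark.read.json('people.json')

# Other formats follow the same pattern (hypothetical files)
# csv_df = spark.read.csv('people.csv', header=True, inferSchema=True)
# parquet_df = spark.read.parquet('people.parquet')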

Inspecting the DataFrame Structure

# Peek into dataframe
df
# DataFrame[address: struct<city:string,country:string>, age: bigint, name: string]

df.show(2)
"""
+------------------+---+-------+
|           address|age|   name|
+------------------+---+-------+
|  [Nanjing, China]| 12|     Li|
|[Los Angeles, USA]| 14|Richard|
+------------------+---+-------+
only showing top 2 rows
"""

df.columns
# ['address', 'age', 'name']

df.printSchema()
"""
root
 |-- address: struct (nullable = true)
 |    |-- city: string (nullable = true)
 |    |-- country: string (nullable = true)
 |-- age: long (nullable = true)
 |-- name: string (nullable = true)
"""

Custom schema

from pyspark.sql.types import StructField, MapType, StringType, LongType, StructType
# Other commonly used types include IntegerType, DateType, etc.

people_schema = StructType([
    StructField('address', MapType(StringType(), StringType()), True),
    StructField('age', LongType(), True),
    StructField('name', StringType(), True),
])

df = spark.read.json('people.json', schema=people_schema)

df.show(1)
"""
+--------------------+---+----+
|             address|age|name|
+--------------------+---+----+
|[country -> China...| 12|  Li|
+--------------------+---+----+
only showing top 1 row
"""

df.dtypes
# [('address', 'map<string,string>'), ('age', 'bigint'), ('name', 'string')]
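
To keep the nested struct layout that Spark inferred originally instead of a map, a sketch of the equivalent StructType-based schema:

nested_schema = StructType([
    StructField('address', StructType([
        StructField('city', StringType(), True),
        StructField('country', StringType(), True),
    ]), True),
    StructField('age', LongType(), True),
    StructField('name', StringType(), True),
])
struct_df = spark.read.json('people.json', schema=nested_schema)
struct_df.dtypes
# [('address', 'struct<city:string,country:string>'), ('age', 'bigint'), ('name', 'string')]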

Selecting and Filtering Data

# Select column
address_df = df.select(['address.city'])
# DataFrame[city: string]

# Filter column with value
df.filter(df.age == 12).show()
"""
+----------------+---+------+
|         address|age|  name|
+----------------+---+------+
|[Nanjing, China]| 12|    Li|
| [Paris, France]| 12| Jacob|
|    [London, UK]| 12|Manuel|
+----------------+---+------+
"""

nj_df = df.filter('address.city == "Nanjing"')
nj_df.show()
"""
+--------------------+---+----+
|             address|age|name|
+--------------------+---+----+
|[country -> China...| 12|  Li|
+--------------------+---+----+
"""

# Take the first rows of the data
df.head(2)
"""
[ 
  Row(address={'country': 'China', 'city': 'Nanjing'}, age=12, name='Li'), 
  Row(address={'country': 'USA', 'city': 'Los Angeles'}, age=14, name='Richard')
]
"""

Extracting Data

people = df.collect()
# returns a list of Row objects

len(people)
# 5

df.select('age').distinct().collect()
# [Row(age=12), Row(age=14), Row(age=16)]
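
Besides collect(), smaller previews avoid pulling the whole dataset to the driver (a small sketch):

df.take(2)    # first two Row objects
df.count()    # 5

# Pull one column out as plain Python values
ages = [row.age for row in df.select('age').collect()]
# [12, 14, 12, 12, 16]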

Row & Column

# ---------------- row -----------------------

first_row = df.head()
# Row(address=Row(city='Nanjing', country='China'), age=12, name='Li')

# Read a single field's value from the row
first_row['age']           # 12
first_row.age              # 12
getattr(first_row, 'age')  # 12
first_row.address
# Row(city='Nanjing', country='China')

# -------------- column -----------------------

first_col = df[0]
first_col = df['address']
# Column<b'address'>

# copy column[s]
address_copy = first_col.alias('address_copy')

# rename column / create new column
df.withColumnRenamed('age', 'birth_age')
df.withColumn('age_copy', df['age']).show(1)
"""
+----------------+---+----+--------+
|         address|age|name|age_copy|
+----------------+---+----+--------+
|[Nanjing, China]| 12|  Li|      12|
+----------------+---+----+--------+
only showing top 1 row
"""

df.withColumn('age_over_18',df['age'] > 18).show(1)
"""
+----------------+---+----+-----------+
|         address|age|name|age_over_18|
+----------------+---+----+-----------+
|[Nanjing, China]| 12|  Li|      false|
+----------------+---+----+-----------+
only showing top 1 row
"""

Raw SQL Queries

df.createOrReplaceTempView("people")
sql_results = spark.sql("SELECT count(*) FROM people")
sql_results.show()
"""
+--------+
|count(1)|
+--------+
|       5|
+--------+
"""

pyspark.sql.functions Examples

from pyspark.sql import functions as F
import datetime as dt

# Using F.udf as a decorator; without an explicit returnType the UDF returns strings
@F.udf()
def calculate_birth_year(age):
    this_year = dt.datetime.today().year
    birth_year = this_year - age
    return birth_year

calculated_df = df.select("*", calculate_birth_year('age').alias('birth_year'))
calculated_df.show(2)
"""
+------------------+---+-------+----------+
|           address|age|   name|birth_year|
+------------------+---+-------+----------+
|  [Nanjing, China]| 12|     Li|      2008|
|[Los Angeles, USA]| 14|Richard|      2006|
+------------------+---+-------+----------+
only showing top 2 rows
"""

# Many helpers in pyspark.sql.functions, including udf (user-defined functions),
# parallelise well over large datasets; this is the functional-programming style.
# While a job runs, the progress bar may look like:
# [Stage 41: >>>>>>>>>>>>>>>>>                    (0 + 1) / 1]
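
The same birth-year column can usually be computed with built-in column functions, which avoids the overhead of a Python UDF (a sketch; F.year and F.current_date are standard pyspark.sql.functions helpers):

df.withColumn('birth_year', F.year(F.current_date()) - F.col('age')).show(2)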

Source: https://zhuanlan.zhihu.com/p/171813899
https://blog.csdn.net/cymy001/article/details/78483723

  • Further reading:

PySpark custom aggregate functions (UDAF): https://www.cnblogs.com/wdmx/p/10156500.html
