PySpark notes and summary

20221027

Problem connecting PySpark to MySQL

java.lang.ClassNotFoundException: com.mysql.cj.jdbc.Driver
Download mysql-connector-java-8.0.25.jar and put it in PySpark's jars folder.
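
A minimal sketch of reading a MySQL table over JDBC once the driver jar is in place; the host, database, table and credentials below are placeholders, not values from these notes.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

mysql_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://127.0.0.1:3306/testdb")  # hypothetical host and database
    .option("driver", "com.mysql.cj.jdbc.Driver")         # the driver class from the error above
    .option("dbtable", "some_table")                      # hypothetical table name
    .option("user", "root")
    .option("password", "password")
    .load()
)
mysql_df.show(5)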

20220427

# coding:utf-8
import findspark
# findspark.init()
import pandas as pd

findspark.init(r"D:\Python37\Lib\site-packages\pyspark")
import os
java8_location = r'D:\Java\jdk1.8.0_301/'  # set your own path
os.environ['JAVA_HOME'] = java8_location
from pyspark.sql import SparkSession

os.environ.setdefault('HADOOP_USER_NAME', 'root')
spark = SparkSession.builder \
    .master("local[4]")\
    .config('spark.sql.debug.maxToStringFields', 2000) \
    .config('spark.debug.maxToStringFields', 2000) \
    .getOrCreate()
spark.conf.set("spark.sql.catalog.iceberg", "org.apache.iceberg.spark.SparkCatalog")
spark.conf.set("spark.sql.catalog.iceberg.type", "hive")
spark.conf.set("spark.sql.catalog.iceberg.uri", "thrift://192.168.1.54:9083")
# Iceberg is the data store and Trino is the query engine; they are deployed on different servers, and here Spark connects to Iceberg directly.
# When connecting as a database you connect to Trino, so the two IPs differ; writing into Iceberg goes through Spark.


if __name__ == '__main__':

    from pyspark.sql.functions import lit
    test = spark.sql("select * from iceberg.ice_dwt.dwt_dm_bi_b2b_customer_churn_wide limit 1;")
    test = test.withColumn("new_dt",lit('2022-04-27'))
    test = test.drop('dt')
    test = test.withColumnRenamed('new_dt','dt')
    test.show()
    test.write.saveAsTable("iceberg.ice_dwt.dwt_dm_bi_b2b_customer_churn_wide", None, "append")
    print('Insert finished')
    
    #################

    # sample test
    # pdf = spark.sql("show tables")
    # test = spark.sql("select * from iceberg.test.flink_test;")
    # df_test = pd.DataFrame({'id':5,"name":'haha'},index=[0])
    # df_test = spark.createDataFrame(df_test)
    # df_test.show()
    # df_test.write.saveAsTable("iceberg.test.flink_test",None,"append")
    # print('Insert finished')
    print()

20220427

df1 = df.drop('Category')
df1.show()
df2 = df.drop('Category', 'ID')
df2.show()
Drop columns
from pyspark.sql.functions import lit
dm.withColumn('Flag_last_entry',lit(0))\
     .withColumn('Flag_2',lit(0)) 
Add constant columns

20220415

https://ohmyweekly.github.io/notes/2020-10-04-interoperability-between-koalas-and-apache-spark/
Koalas: a bridge between pandas DataFrames and Spark
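
A minimal sketch of moving between the two worlds; this assumes Spark 3.2+, where Koalas ships inside PySpark as pyspark.pandas (older versions use the separate databricks.koalas package).

import pandas as pd
import pyspark.pandas as ps

pdf = pd.DataFrame({"id": [1, 2], "name": ["a", "b"]})
psdf = ps.from_pandas(pdf)       # pandas -> pandas-on-Spark
sdf = psdf.to_spark()            # pandas-on-Spark -> Spark DataFrame
psdf2 = sdf.pandas_api()         # Spark DataFrame -> pandas-on-Spark (Spark 3.2+)
print(psdf2.head())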

20220411

  uo = spark_big.read.csv(
        config["hdfs_path_uo"] + "/*"
        , header=True)
When the first row holds the column names, just pass header=True; there is no need to specify a schema.

Exception: Java gateway process exited before sending its port number
https://blog.csdn.net/qq_24406903/article/details/85167356

 File file:/data/engine-customer-churn-bigdata/temp/temp_od.csv does not exist
It is possible the underlying files have been updated

Spark can write to a path that exists only on the local driver host, but reading it back fails
because the other nodes do not have that path or file, so it is best to write to HDFS
and read from HDFS as well.

spark_big.read.csv("path")

    od_all = spark_big.read.options(header='True',delimiter=',').csv("file://"+PATH+"/temp/temp_od.csv")

    od_all = spark_big.read.format('com.databricks.spark.csv').options(header='true', inferschema='true').load(
        PATH+"/temp/temp_od.csv")

To read from the local filesystem, prefix the path with file:/// — note that on Linux the prefix ends up as three forward slashes.
22/04/09 16:28:12 ERROR Inbox: An error happened while processing message in the inbox for CoarseGrainedScheduler
java.lang.OutOfMemoryError: Java heap space

22/04/09 16:28:14 ERROR Inbox: An error happened while processing message in the inbox for CoarseGrainedScheduler
java.lang.OutOfMemoryError: Java heap space
Exception in thread "dispatcher-CoarseGrainedScheduler" java.lang.OutOfMemoryError: Java heap space

import findspark
findspark.init()
from pyspark.sql import SparkSession

appname = "customer_churn"
spark_big = (
            SparkSession.Builder()
            .appName(appname)
            .master('spark://192.168.1.122:7077')
            .master('yarn')
            # running through YARN/Hadoop is what really makes this fast; Spark's own standalone
            # scheduler is not great and leaves the cluster underused (this later .master() call
            # overrides the one above)
            
            .config("spark.dynamicAllocation.enabled", "false")
            
            .config("spark.num-executors", "5")
            .config("spark.executor.memory", "2g")
            .config("spark.driver.memory", "4g")
            .config("spark.executor.cores", "4")
            
            .config("spark.default.parallelism", "500")
            .config("spark.sql.shuffle.partitions", "500")
            .config("spark.speculation", "True")
            .config("spark.speculation.interval", "100")
            .config("spark.speculation.quantile", "0.75")
            .config("spark.speculation.multiplier", "1.5")
            .config("spark.scheduler.mode", "FAIR")
            .config("spark.driver.extraJavaOptions", "-Xss4048M")
            .config("spark.rpc.message.maxSize", "1024")
            # .config("spark.memory.offHeap.enabled","true")
            # .config("spark.memory.offHeap.size","2g")
            .getOrCreate()
)



Solution: lower the amount of cluster resources requested and use fewer executors to avoid wasting memory. When the data volume is not that large,
blindly configuring too many executors just wastes resources (heap memory is shared within each executor process), leaves less memory for the driver and scheduling, and may even slow the whole computation down?
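
A sketch of a leaner configuration in that spirit (fewer executors, more memory each); the numbers are illustrative, not tuned values from these notes.

from pyspark.sql import SparkSession

spark_small = (
    SparkSession.Builder()
    .appName("customer_churn")
    .master("yarn")
    .config("spark.dynamicAllocation.enabled", "false")
    .config("spark.executor.instances", "2")   # fewer executors than above
    .config("spark.executor.cores", "2")
    .config("spark.executor.memory", "4g")     # more memory per executor
    .config("spark.driver.memory", "4g")       # leave room for the driver/scheduler
    .getOrCreate()
)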


20220408

On Linux, writing the CSV succeeds as long as it is not merged into a single file;
writing it out as two files works.
Running through YARN

Resource consumption only shows up in the console when the job is submitted via spark-submit or YARN;
running the code directly does not show it?
It is best to run through YARN rather than Spark's own standalone scheduler.

Check detailed resource usage at ip:8088 (the YARN ResourceManager web UI).
AttributeError: ‘NoneType’ object has no attribute ‘sc’
When Spark has to be started and stopped several times, the sessions must be created from two different modules; my guess is they need two different names.
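
One pattern that is commonly used for this (a sketch, not necessarily the exact fix these notes refer to): stop the old session and go through getOrCreate() again instead of holding on to a stopped object.

from pyspark.sql import SparkSession

def new_session(appname):
    # always rebuild via getOrCreate(); a stopped SparkSession object is unusable
    return SparkSession.builder.appName(appname).getOrCreate()

spark = new_session("job_step_1")
spark.range(3).show()
spark.stop()

spark = new_session("job_step_2")   # a fresh session rather than the stopped one
spark.range(3).show()
spark.stop()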

20220407

 # # uo.write.mode("overwrite").save("hdfs://k8s04:9001/data/uo")
    # # od.write.mode("overwrite").save("hdfs://k8s04:9001/data/od")

    # from step5_build_spark_env_small import spark_small
    # uo = spark_small.read.parquet("hdfs://k8s04:9001/data/uo/*")
    # od = spark_small.read.parquet("hdfs://k8s04:9001/data/od/*")
    Write to HDFS and read back from HDFS
 uo_output = "file://"+PATH+"/temp/uo"  

 uo_output = "file:///"+PATH+"/temp/uo"  
 On Windows it is three slashes (file:///)
   uo.repartition(1).write.format('csv').option('header','true').save(uo_output,mode='overwrite')
train.write.format('com.databricks.spark.csv').save('file_after_processing.csv')
A quick way to write a CSV from PySpark to local disk; what ends up on disk is two files inside uo, a folder you created yourself.
Note that it is .option() that adds a single configuration entry here, not .options().

TypeError: options() takes 1 positional argument but 3 were given
The arguments to options() were passed incorrectly; it only accepts keyword arguments.
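
A short sketch of the two correct call styles (the path is a placeholder, and an existing SparkSession named spark is assumed):

# per-key configuration with option()
df1 = spark.read.option('header', 'true').option('delimiter', ',').csv('file:///tmp/example.csv')

# several settings at once with options() -- keyword arguments only;
# passing them positionally raises the TypeError above
df2 = spark.read.options(header='true', delimiter=',').csv('file:///tmp/example.csv')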

https://blog.csdn.net/jingyi130705008/article/details/108236217
Save as a single local CSV file (important)
Parquet can also be saved locally

https://www.jianshu.com/p/80964332b3c4
Save as a single CSV file

(screenshot) A run configuration with low local resource usage

https://www.csdn.net/tags/Mtjacg3sNjU5MzQtYmxvZwO0O0OO0O0O.html
Speeding up toPandas()

RuntimeError: Java gateway process exited before sending its port number

# coding:utf-8
import findspark
#findspark.init()
findspark.init("/usr/local/python3/lib/python3.7/site-packages/pyspark")
import os
from pyspark.sql import SparkSession

os.environ["JAVA_HOME"] = "/usr/local/jdk1.8.0_212/"  # note: the JDK root directory, not its bin directory
# os.environ["PYTHONPATH"] = "/usr/local/bin/python3"
appname = "single_local"
spark = (
    SparkSession.Builder()
    .appName(appname)
    .master('local[6]')
    .config("spark.dynamicAllocation.enabled", "false")
    .config("spark.num-executors", "2")
    .config("spark.executor.memory", "4g")
    .config("spark.executor.cores", "2")
    .config("spark.driver.memory", "4g")
    .getOrCreate()
)   
df = spark.createDataFrame([
    ('b', 2)
], schema='name string, age int')
    
df.show()
df.createOrReplaceTempView("t1")
uo = spark.read.parquet("hdfs://k8s04:9001/data/uo/*")
# read the file from HDFS
uo.show(1)
spark.stop()

Building the SparkSession


20220402

https://blog.csdn.net/bowenlaw/article/details/106826553
https://zhuanlan.zhihu.com/p/34901558
PySpark reading, writing, and saving

https://blog.csdn.net/qq_40285736/article/details/106690465
Reading HDFS from Python; the port is the web port 9870

   od.select("user_id", "goods_nums_total", "category_level2_nums_total").dropDuplicates().show()
   No assignment on the left-hand side (the chained result is only shown, not stored)

https://blog.csdn.net/a8131357leo/article/details/108590299
The duplicate-column problem with PySpark joins
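
A small sketch of the usual way to avoid the duplicated key column: join on the column name (or a list of names) instead of an equality expression; an existing SparkSession named spark is assumed.

df1 = spark.createDataFrame([(1, 'a'), (2, 'b')], ['user_id', 'v1'])
df2 = spark.createDataFrame([(1, 'x'), (2, 'y')], ['user_id', 'v2'])

# joining on the name keeps a single user_id column in the result
joined = df1.join(df2, on='user_id', how='left')
joined.show()

# joining on an expression keeps both user_id columns and leads to ambiguity later
# joined_dup = df1.join(df2, df1['user_id'] == df2['user_id'], 'left')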

Spark报Total size of serialized results of 12189 tasks is bigger than spark.driver.maxResultSize
https://blog.csdn.net/qq_27600723/article/details/107023574
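
The error above comes from the driver-side limit on collected result size; a minimal sketch of raising it (the better fix is usually to avoid collecting that much to the driver at all):

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("bigger_result_size")
    .config("spark.driver.maxResultSize", "4g")   # default is 1g; "0" removes the limit
    .getOrCreate()
)
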
Reading and writing Iceberg with PySpark

# coding:utf-8
import findspark
findspark.init(r"D:\Python37\Lib\site-packages\pyspark")
# specify the pyspark path here; on a server, prefer the pyspark path that belongs to the Spark installation
import os
java8_location = r'D:\Java\jdk1.8.0_301/'  # set your own path
os.environ['JAVA_HOME'] = java8_location
from pyspark.sql import SparkSession

def get_spark():
    # read Iceberg tables with PySpark
    spark = SparkSession.builder.getOrCreate()
    spark.conf.set("spark.sql.catalog.iceberg", "org.apache.iceberg.spark.SparkCatalog")
    spark.conf.set("spark.sql.catalog.iceberg.type", "hive")
    spark.conf.set("spark.sql.catalog.iceberg.uri", "thrift://192.168.1.54:9083")
    # for a different target address / server cluster, copy the corresponding two Hive config files into the local client's pyspark conf folder
    return spark


if __name__ == '__main__':
    spark = get_spark()
    pdf = spark.sql("select shangpgg from iceberg.test.end_spec limit 10")
    spark.sql("insert into iceberg.test.end_spec  values ('aa','bb')")

    pdf.show()
    print()
1. Create a conf folder under pyspark and put the two Hive config files used by Iceberg
into it:
hdfs-site.xml
hive-site.xml
2. Put iceberg-spark3-runtime-0.13.1.jar into pyspark's jars folder

Failed to open input stream for file: hdfs://ns1/warehouse/test.db/end_spec/metadata/00025-73e8d58b-c4f1-4c81-b0a8-f1a8a12090b1.metadata.json
org.apache.iceberg.exceptions.RuntimeIOException: Failed to open input stream for file: hdfs://ns1/warehouse/test.db/end_spec/metadata/00025-73e8d58b-c4f1-4c81-b0a8-f1a8a12090b1.metadata.json

The two Hive config files were not found; specifying the pyspark path in findspark.init() solves it
# findspark.init(r"D:\Python37\Lib\site-packages\pyspark")

      od_all = spark.createDataFrame(od)
    od_all.createOrReplaceTempView('od_all')
    od_duplicate = spark.sql("select distinct user_id,goods_id,category_second_id from od_all;")
    od_duplicate.createOrReplaceTempView('od_duplicate')
    od_goods_group = spark.sql(" select user_id,count(goods_id) goods_nums_total from od_duplicate group by user_id ;")
Every table referenced in the SQL statement must first be registered with createOrReplaceTempView.
Error when executing SQL: extraneous input ';' expecting EOF near '<EOF>' (usually caused by a stray trailing semicolon in the SQL string)
https://blog.csdn.net/xieganyu3460/article/details/83055935

https://spark.apache.org/docs/latest/api/python/user_guide/pandas_on_spark/types.html?highlight=type
PySpark data types

TypeError: field id: Can not merge type <class 'pyspark.sql.types.StringType'> and <class 'pyspark.sql.types.LongType'>

https://blog.csdn.net/weixin_40983094/article/details/115630358
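
This error typically means a column in the source pandas DataFrame mixes types (for example int and str in 'id'), so schema inference cannot merge them. A minimal sketch of the workaround, assuming an existing SparkSession named spark:

import pandas as pd

pdf = pd.DataFrame({'id': [1, '2', 3], 'name': ['a', 'b', 'c']})   # mixed int/str column triggers the merge error

pdf['id'] = pdf['id'].astype(str)                                  # make the column one consistent type first
sdf = spark.createDataFrame(pdf, schema='id string, name string')  # explicit schema instead of inference
sdf.show()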

# coding:utf-8
from pathlib import Path

import pandas as pd
from pyspark.ml.fpm import FPGrowth
import datetime
import platform
import os
import warnings
warnings.filterwarnings("ignore")
from utils_ import usetime,log_generate
from param_config import config

logger = log_generate(config.log["name"], config.log["date"])

sys = platform.system()
if sys == "Windows":
    PATH = os.path.abspath(str(Path("").absolute())) + "/"
else:
    PATH = "/home/guanlian_algo_confirm3/"

os.environ["JAVA_HOME"] = r"D:\Java\jdk1.8.0_301"

t1 = datetime.datetime.now()

@usetime
def calculate_fpgrowth(spark, data, total_nums):

    data = spark.createDataFrame(data)
    data.createOrReplaceTempView("all_data")
    part_data = spark.sql("select * from all_data ")

    all_record = part_data.select("goods_huizong")  # one or more columns can be selected
    all_record.show(5)

    def transform_to_list(col):
        per_row = col.split("|")  # the column value is passed in; each row is processed automatically
        return per_row

    all_record = all_record.rdd.map(
        lambda row: (row["goods_huizong"], transform_to_list(row["goods_huizong"]))
    )
    all_record = spark.createDataFrame(
        all_record, ["goods_huizong", "goods_huizong_list"]
    )
    all_record.show(5)

    all_record = all_record.select("goods_huizong_list")

    all_record = all_record.withColumnRenamed("goods_huizong_list", "items")
    logger.debug("total number of records {}".format(total_nums))
    fp = FPGrowth(minSupport=0.0001, minConfidence=0.8)
    fpm = fp.fit(all_record)  # fit the model
    fpm.freqItemsets.show(5)  # show the first five frequent itemsets in the console
    fp_count = fpm.freqItemsets.count()
    if fp_count == 0:
        return pd.DataFrame()
    logger.debug("*" * 100)
    logger.debug("number of frequent itemsets {} ".format(fp_count))
    ass_rule = fpm.associationRules  # strong association rules
    ass_rule.show()
    rule_nums = ass_rule.count()
    if rule_nums == 0:
        return pd.DataFrame()

    logger.debug("number of rules {} ".format(rule_nums))
    ass_rule = ass_rule.select(["antecedent", "consequent", "confidence"])
    ass_rule.show(5)
    ass_rule_df = ass_rule.toPandas()
    ass_rule_df["antecedent_str"] = ass_rule_df["antecedent"].apply(lambda x: str(x))
    ass_rule_df.sort_values(
        ["antecedent_str", "confidence"], ascending=[True, False], inplace=True
    )

    t2 = datetime.datetime.now()
    logger.debug("spent ts: {}".format(t2 - t1))
    return ass_rule_df

A simple example

20220314

Parameters set in code take precedence over parameters passed on the command line; the values set in code are the ones actually used.

py4j.protocol.Py4JJavaError: An error occurred while calling o24.sql.
: org.apache.spark.SparkException: Cannot find catalog plugin class for catalog 'iceberg': org.apache.iceberg.spark.SparkCatalog
You need to download iceberg-spark-runtime-3.2_2.12-0.13.1.jar from the Iceberg website
and put it under Spark's jars directory

https://iceberg.apache.org/docs/latest/getting-started/
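
As an alternative sketch (untested here), the same runtime jar can usually be pulled in through the spark.jars.packages config; the Maven coordinate below is inferred from the jar name above, and the setting only takes effect if it is applied before the session starts.

from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.jars.packages", "org.apache.iceberg:iceberg-spark-runtime-3.2_2.12:0.13.1")
    .config("spark.sql.catalog.iceberg", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.iceberg.type", "hive")
    .config("spark.sql.catalog.iceberg.uri", "thrift://192.168.1.54:9083")
    .getOrCreate()
)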


# coding:utf-8
import findspark
import pandas as pd
findspark.init()
from datetime import datetime, date
import re
from pyspark.sql import SparkSession
# from out_udf import outer_udf
#  /home/spark-3.1.2-bin-hadoop3.2/bin/spark-submit \
#  --master local  --py-files /root/bin/python_job/pyspark/out_udf.py hello_spark.py
# from pyspark.sql.functions import pandas_udf
spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([
    (1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
    (2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
    (3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
], schema='a long, b double, c string, d date, e timestamp')
df.createOrReplaceTempView("t1")

# UDF - anonymous function (lambda)
spark.udf.register('xtrim', lambda x: re.sub('[ \n\r\t]', '', x), 'string')

# UDF - named function
def xtrim2(record):
    return re.sub('[ \n\r\t]', '', record)

# read Iceberg tables with PySpark
spark.conf.set("spark.sql.catalog.iceberg", "org.apache.iceberg.spark.SparkCatalog")
spark.conf.set("spark.sql.catalog.iceberg.type", "hive")
spark.conf.set("spark.sql.catalog.iceberg.uri", "thrift://192.168.1.54:9083")

spark.udf.register('xtrim2', xtrim2, 'string')

# spark.udf.register('outer_udf', outer_udf)


if __name__ == '__main__':
    df.show()

    spark.sql("select * from t1").show()

    spark.sql("select xtrim2('测试 数据    你好') ").show()
    spark.sql("use iceberg").show()
    spark.sql("show databases").show()

Reading Iceberg with PySpark


from datetime import datetime, date
import re
from pyspark.sql import SparkSession
from out_udf import outer_udf
#  /home/spark-3.1.2-bin-hadoop3.2/bin/spark-submit \
#  --master local  --py-files /root/bin/python_job/pyspark/out_udf.py hello_spark.py
# from pyspark.sql.functions import pandas_udf


spark = SparkSession.builder.getOrCreate()

df = spark.createDataFrame([
    (1, 2., 'string1', date(2000, 1, 1), datetime(2000, 1, 1, 12, 0)),
    (2, 3., 'string2', date(2000, 2, 1), datetime(2000, 1, 2, 12, 0)),
    (3, 4., 'string3', date(2000, 3, 1), datetime(2000, 1, 3, 12, 0))
], schema='a long, b double, c string, d date, e timestamp')
df.createOrReplaceTempView("t1")

# UDF - anonymous function (lambda)
spark.udf.register('xtrim', lambda x: re.sub('[ \n\r\t]', '', x), 'string')

# UDF - named function
def xtrim2(record):
    return re.sub('[ \n\r\t]', '', record)

# read Iceberg tables with PySpark
spark.conf.set("spark.sql.catalog.iceberg", "org.apache.iceberg.spark.SparkCatalog")
spark.conf.set("spark.sql.catalog.iceberg.type", "hive")
spark.conf.set("spark.sql.catalog.iceberg.uri", "thrift://192.168.1.54:9083")



spark.udf.register('xtrim2', xtrim2, 'string')
spark.udf.register('outer_udf', outer_udf)


if __name__ == '__main__':
    df.show()

    spark.sql("select * from t1").show()

    spark.sql("select xtrim2('测试 数据    你好') ").show()
    spark.sql("select outer_udf('测试数据你好') ").show()

    spark.sql("use iceberg").show()
    spark.sql("show databases").show()

Operating on Iceberg (Hive catalog) from PySpark

20220311

AttributeError: 'NoneType' object has no attribute 'sc' — how to fix it
Move the construction of the Spark object outside the loop, or create a temporary sc object?

Spark is essentially the same data-processing code written in another language, just another way of expressing it.

Parameter tuning
Reduce the number of executors and increase the other values; this makes errors less likely.

Spark job throws java.lang.StackOverflowError
https://blog.csdn.net/u010936936/article/details/88363449

Spark:java.io.IOException: No space left on device

https://blog.csdn.net/dupihua/article/details/51133551
ass_rule = ass_rule.filter('antecedent_len == 1')
ass_rule = ass_rule.filter('consequent_len == 1')
DataFrame filtering

https://blog.csdn.net/qq_40006058/article/details/88931884
Various DataFrame operations

20220310

data = spark.createDataFrame(data) # convert a plain (pandas) DataFrame into a Spark DataFrame
data.createOrReplaceTempView("all_data") # register it as a temp view so it can be queried with SQL
   part_data = spark.sql("select * from all_data where user_type= " + str(cus_type)) # SQL operation
   

https://blog.csdn.net/zhurui_idea/article/details/73090951


    ass_rule = ass_rule.rdd.map(lambda row:(row["antecedent"],row['consequent'], calculate_len(row['antecedent'])))
    # calling .rdd.map turns it into a PipelinedRDD
    ass_rule = spark.createDataFrame(ass_rule)
    # calling createDataFrame again turns it back into a DataFrame

Converting between DataFrame and RDD

(screenshot) Process each row automatically while keeping the other original fields

java.lang.IllegalStateException: Input row doesn't have expected number of values required by the sc
Oddly, when splitting a string into a list, there must be other fields in front of it, otherwise it throws an error
 part_data = spark.sql("select * from all_data where user_type= " + str(cus_type))

    part_data.show()
    all_record = part_data.select("user_type",'goods_huizong')
                        # multiple columns can be selected

    all_record = all_record.rdd.map(lambda row: (row['user_type'],transform_to_list(row['goods_huizong'])))
Multiple fields can also be selected afterwards
  File "/usr/local/spark-2.4.5-bin-hadoop2.7/python/lib/pyspark.zip/pyspark/cloudpickle.py", line 875, in subimport
    __import__(name)
ModuleNotFoundError: No module named 'utils_'

Functions used inside PySpark big-data transformations can only live in the current module? If imported
from another module they may not be recognized on the workers?
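
A common way to make a helper module visible on the executors is to ship it explicitly; a sketch, assuming utils_.py sits next to the driver script.

from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
# ship the module to every executor so worker-side imports of utils_ can succeed
spark.sparkContext.addPyFile("utils_.py")

# equivalently, from the command line (as already used earlier in these notes):
# spark-submit --py-files utils_.py your_job.py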

20211231

 Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

The resources were occupied by other users.

20211230

Spark 2.0.x dump a csv file from a dataframe containing one array of type string
https://stackoverflow.com/questions/40426106/spark-2-0-x-dump-a-csv-file-from-a-dataframe-containing-one-array-of-type-string

from pyspark.sql.functions import udf
from pyspark.sql.types import StringType

def array_to_string(my_list):
    return '[' + ','.join([str(elem) for elem in my_list]) + ']'

array_to_string_udf = udf(array_to_string, StringType())

df = df.withColumn('column_as_str', array_to_string_udf(df["column_as_array"]))
df.drop("column_as_array").write.csv(...)
The approach above had a problem: the values in the resulting column came out looking like generator expressions

import org.apache.spark.sql.functions._
val dumpCSV = df.withColumn("ArrayOfString", df("ArrayOfString").cast("string"))
                .write
                .csv(path="/home/me/saveDF")
This version works (note that it is Scala)

https://www.jianshu.com/p/3735b5e2c540
https://www.jianshu.com/p/80964332b3c4
Writing CSV from an RDD or a Spark DataFrame; plain pandas cannot write to HDFS

import  findspark
findspark.init()
from pyspark import SparkConf
from pyspark.sql import SparkSession
from pyspark.ml.fpm import FPGrowth
import datetime
from pyspark.sql.types import StringType
from pyspark.sql.functions import udf
from tqdm import tqdm
import platform
import os
os.environ['JAVA_HOME']=r'/usr/local/jdk1.8.0_212'
t1 = datetime.datetime.now()
appname = "FPgrowth"
#master = "local[6]"

spark = SparkSession.Builder().appName(appname)\
    .config('spark.executor.instances','50')\
    .config('spark.executor.memory','4g')\
    .config('spark.executor.cores','3')\
    .config('spark.driver.memory','1g')\
    .config('spark.default.parallelism','1000')\
    .config('spark.storage.memoryFraction','0.5')\
    .config('spark.shuffle.memoryFraction','0.3')\
    .config("spark.speculation",'True')\
    .config("spark.speculation.interval",'100')\
    .config("spark.speculation.quantile","0.75")\
    .config("spark.speculation.multiplier",'1.5')\
    .config("spark.scheduler.mode",'FAIR')\
    .getOrCreate()
df = spark.read.format("csv"). \
    option("header", "true") \
    .load("/data/tb_order_user_sec_type_group.csv")

df.createOrReplaceTempView('all_data')
sec_type=spark.sql("select sec_type from all_data ")

https://hub.mybinder.turing.ac.uk/user/apache-spark-sjqwupmp/notebooks/python/docs/source/getting_started/quickstart_ps.ipynb
Quickstart: Pandas API on Spark — getting started with pandas on PySpark

part_data=spark.sql("select * from all_data where sec_type= "+ cus_type)
part_data.count() # count the number of rows (elements)
lines.first() # the first element of this RDD, i.e. the first line of README.md

http://spark.apache.org/docs/latest/api/python/getting_started/index.html
Official PySpark documentation; refer to it for both Spark SQL and Spark DataFrames

(screenshot) Quickly convert to pandas for further operations

20210831

Windows 10: Spark warning. WARN Utils: Service 'SparkUI' could not bind on port 4040. Attempting port 4041.

https://blog.csdn.net/weixin_43748432/article/details/107378033

java.lang.OutOfMemoryError: GC overhead limit exceeded
https://blog.csdn.net/gaokao2011/article/details/51707163

Increase the parameters below

Spark operators: basic RDD transformations (5) – mapPartitions
http://lxw1234.com/archives/2015/07/348.htm
Map one partition at a time instead of mapping every element individually,
to improve efficiency (see the sketch below)
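
A minimal sketch of mapPartitions versus map, assuming an existing SparkSession named spark; the per-partition function receives an iterator and returns an iterator, so any expensive setup is paid once per partition instead of once per element.

rdd = spark.sparkContext.parallelize(range(10), numSlices=2)

def add_one_per_partition(rows):
    # any expensive setup (e.g. opening a connection) would go here, once per partition
    for x in rows:
        yield x + 1

print(rdd.mapPartitions(add_one_per_partition).collect())   # [1, 2, ..., 10]
print(rdd.map(lambda x: x + 1).collect())                    # same result, element by element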

spark = (
    SparkSession.Builder().appName(appname).master(master)
    .config('spark.some.config.option0', 'some_value')
    .config('spark.executor.memory', '2g')            # executor memory
    .config('spark.executor.cores', '2')              # CPU cores available to a single executor
    .config('spark.executor.instances', '10')         # total number of executors
    .config('spark.driver.memory', '1g')              # driver setting -- should it be smaller than the executors'?
    .config('spark.default.parallelism', '1000')      # number of tasks
    .config('spark.sql.shuffle.partitions', '300')    # number of shuffle partitions
    .config('spark.driver.extraJavaOptions', '-Xss2048M')   # JVM-related setting
    .config('spark.speculation', 'True')              # avoid getting stuck on one stage
    .config('spark.speculation.interval', '100')      # avoid getting stuck on one stage
    .config('spark.speculation.quantile', '0.1')      # avoid getting stuck on one stage
    .config('spark.speculation.multiplier', '1')      # avoid getting stuck on one stage
    .config('spark.scheduler.mode', 'FAIR')           # scheduling mode
    .getOrCreate()
)
Parameter settings

spark = (
    SparkSession.Builder().appName(appname).master(master)
    .config('spark.some.config.option0', 'some_value')
    .config('spark.executor.memory', '2g')
    .config('spark.executor.cores', '2')
    .config('spark.executor.instances', '10')
    .config('spark.driver.memory', '3g')
    .config('spark.default.parallelism', '1000')      # this parameter matters a lot
    .config('spark.sql.shuffle.partitions', '300')    # this parameter matters a lot
    .config('spark.driver.extraJavaOptions', '-Xss3072M')   # this parameter matters a lot
    .config('spark.speculation', 'True')
    .config('spark.speculation.interval', '100')
    .config('spark.speculation.quantile', '0.1')
    .config('spark.speculation.multiplier', '1')
    .config('spark.scheduler.mode', 'FAIR')
    .getOrCreate()
)

With 32 GB of memory in total, this configuration produces results quickly.

https://blog.csdn.net/lotusws/article/details/52423254
spark master local parameters

Then open http://192.168.1.116:4040 in the browser:
the Spark UI (dashboard) address

(Spark UI screenshot) Viewing the configured parameters

(Spark UI screenshot)
Active: stages currently running
Pending: stages not yet run
Completed: finished stages
'12/69 (13)' means 69 stages in total, 12 already finished, 13 currently running
On the dashboard, mainly watch the Stages and Executors tabs

(Spark UI screenshot) The event timeline, read from left to right

(Spark UI screenshot) Under Jobs, check the specific reason for a failure

https://blog.csdn.net/weixin_42340179/article/details/82415085
https://blog.csdn.net/whgyxy/article/details/88779965
Stuck on a certain stage
Analysis of the case where Spark runs normally but one stage gets stuck and stops progressing

https://blog.csdn.net/yf_bit/article/details/93610829
Important
https://www.cnblogs.com/candlia/p/11920289.html
https://www.cnblogs.com/xiao02fang/p/13197877.html
Factors that affect Spark performance

https://www.csdn.net/tags/OtDaUgysMTk3Mi1ibG9n.html
https://www.cnblogs.com/yangsy0915/p/6060532.html
Important
PySpark configuration parameters

https://www.javaroad.cn/questions/15705
Looping over rows (see the sketch below)

http://www.sofasofa.io/forum_main_post.php?postid=1005461
Getting the total number of rows and columns
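
A tiny sketch covering both notes above (looping over rows, and getting the row/column counts), assuming an existing Spark DataFrame named df:

# total rows and columns
n_rows = df.count()
n_cols = len(df.columns)
print(n_rows, n_cols)

# loop over rows without pulling everything onto the driver at once
for row in df.toLocalIterator():
    print(row.asDict())

# for small DataFrames, collect() is simpler
for row in df.collect():
    print(row)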

https://blog.csdn.net/qq_40006058/article/details/88822268
Learning PySpark | 68 commonly used functions | explanations + Python code

https://blog.csdn.net/qq_29153321/article/details/88648948
RDD operations

https://www.jianshu.com/p/55efdcabd163
Some simple, commonly used PySpark functions and methods

http://sofasofa.io/forum_main_post.php?postid=1002482
Renaming DataFrame columns
