pyspark

PySpark built-in functions

PySpark SQL provides a set of built-in standard functions in pyspark.sql.functions for working with DataFrames and SQL queries. All of these functions return a pyspark.sql.Column.
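
For example, every call such as upper(col("name")) just builds a Column expression; nothing is evaluated until an action runs on the DataFrame. A minimal sketch (the DataFrame and column names are made up for illustration):

from pyspark.sql import SparkSession
from pyspark.sql.functions import upper, col

spark = SparkSession.builder.master("local[1]").getOrCreate()
df = spark.createDataFrame([("alice",), ("bob",)], ["name"])

# Each built-in function returns a Column expression,
# which is only evaluated when an action (show/collect) runs.
expr = upper(col("name"))
print(type(expr))  # a pyspark.sql.Column (exact class path depends on the Spark version)
df.select(expr.alias("upper_name")).show()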

get_json_object
  /**
   * Extracts json object from a json string based on json path specified, and returns json string
   * of the extracted json object. It will return null if the input json string is invalid.
   *
   * @group json_funcs
   * @since 1.6.0
   */
  def get_json_object(e: Column, path: String): Column =
    Column.fn("get_json_object", e, lit(path))

The Python implementation below shows that the Python function simply reuses the JVM implementation through the py4j gateway; a small sketch after the helper functions demonstrates the wrapping in action.


@_try_remote_functions
def get_json_object(col: "ColumnOrName", path: str) -> Column:
    """
    Extracts json object from a json string based on json `path` specified, and returns json string
    of the extracted json object. It will return null if the input json string is invalid.

    .. versionadded:: 1.6.0

    .. versionchanged:: 3.4.0
        Supports Spark Connect.

    Parameters
    ----------
    col : :class:`~pyspark.sql.Column` or str
        string column in json format
    path : str
        path to the json object to extract

    Returns
    -------
    :class:`~pyspark.sql.Column`
        string representation of given JSON object value.

    Examples
    --------
    >>> data = [("1", '''{"f1": "value1", "f2": "value2"}'''), ("2", '''{"f1": "value12"}''')]
    >>> df = spark.createDataFrame(data, ("key", "jstring"))
    >>> df.select(df.key, get_json_object(df.jstring, '$.f1').alias("c0"), \\
    ...                   get_json_object(df.jstring, '$.f2').alias("c1") ).collect()
    [Row(key='1', c0='value1', c1='value2'), Row(key='2', c0='value12', c1=None)]
    """
    from pyspark.sql.classic.column import _to_java_column

    return _invoke_function("get_json_object", _to_java_column(col), path)


def _invoke_function(name: str, *args: Any) -> Column:
    """
    Invokes JVM function identified by name with args
    and wraps the result with :class:`~pyspark.sql.Column`.
    """
    from pyspark import SparkContext

    assert SparkContext._active_spark_context is not None
    jf = _get_jvm_function(name, SparkContext._active_spark_context)
    return Column(jf(*args))


def _get_jvm_function(name: str, sc: "SparkContext") -> Callable:
    """
    Retrieves JVM function identified by name from
    Java gateway associated with sc.
    """
    assert sc._jvm is not None
    return getattr(getattr(sc._jvm, "org.apache.spark.sql.functions"), name)


def _to_java_column(col: "ColumnOrName") -> "JavaObject":
    if isinstance(col, Column):
        jcol = col._jc
    elif isinstance(col, str):
        jcol = _create_column_from_name(col)
    else:
        raise PySparkTypeError(
            error_class="NOT_COLUMN_OR_STR",
            message_parameters={"arg_name": "col", "arg_type": type(col).__name__},
        )
    return jcol
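
Putting the pieces together: _invoke_function looks up org.apache.spark.sql.functions.get_json_object on the JVM via the py4j gateway, _to_java_column converts each argument to a Java Column, and the returned JVM Column is wrapped back into a Python Column. A quick way to observe this in classic (non-Connect) mode; it relies on the private _jc attribute, so treat it purely as an illustration:

from pyspark.sql import SparkSession
from pyspark.sql.functions import get_json_object

spark = SparkSession.builder.master("local[1]").getOrCreate()

c = get_json_object("jstring", "$.f1")
# In classic mode the Python Column is a thin wrapper around a JVM Column
# reachable through the py4j gateway.
print(type(c))      # a pyspark Column wrapper
print(type(c._jc))  # py4j JavaObject pointing at the JVM Column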

FAQ

ModuleNotFoundError: No module named 'numpy'

When you hit this error, first check which Python version the pyspark shell is actually using:

Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 3.3.2
      /_/

Using Python version 3.9.9 (main, Mar  3 2024 19:54:45)
Spark context Web UI available at http://172.17.48.107:4040
Spark context available as 'sc' (master = local[10], app id = local-1719386128955).
SparkSession available as 'spark'.

Then install numpy with the pip that belongs to that same Python version; otherwise the package ends up under a different interpreter and pyspark still cannot find it.

pip3.9 install numpy
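
If it is unclear which interpreter pip3.9 maps to, you can ask the running pyspark shell directly; the interpreter it reports is the one whose site-packages must contain numpy. The interpreter path in the comments is an assumption for a typical setup, adjust it to your environment:

# Run inside the pyspark shell (or a spark-submit'ed script):
import sys
print(sys.executable)  # the Python interpreter actually used by the driver
print(sys.version)     # should match the "Using Python version ..." banner above

# If it points at the wrong interpreter, set PYSPARK_PYTHON before launching, e.g.:
#   export PYSPARK_PYTHON=/usr/bin/python3.9
#   export PYSPARK_DRIVER_PYTHON=/usr/bin/python3.9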
