pyspark.sql.utils.AnalysisException: Table or view not found: df; line 1 pos 52;

Original code:

import pandas as pd
from pandas import DataFrame
from pyspark.sql import SparkSession

df = DataFrame(pd.read_csv('Tracks.csv',index_col=0))
df.columns=['CustomerID', 'TrackID', 'Date', 'Mobile', 'ZIP']
spark = SparkSession \
        .builder \
        .appName("Python Spark SQL Hive integration example") \
        .getOrCreate()
df = spark.createDataFrame(df)
df.show()
train_data=df.select('CustomerID','TrackID').distinct()
spark.sql("select CustomerID, TrackID, count(*) as rating from df \
           group by CustomerID, TrackID \
           order by CustomerID, count(*) desc").show(8)

The error:

Traceback (most recent call last):
  File "C:/Users/Administrator/Desktop/tencent实习/musicclassification/Classify.py", line 19, in <module>
    order by CustomerID, count(*) desc").show(8)
  File "D:\anaconda\envs\tensorflow2.0\lib\site-packages\pyspark\sql\session.py", line 646, in sql
    return DataFrame(self._jsparkSession.sql(sqlQuery), self._wrapped)
  File "D:\anaconda\envs\tensorflow2.0\lib\site-packages\py4j\java_gateway.py", line 1305, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "D:\anaconda\envs\tensorflow2.0\lib\site-packages\pyspark\sql\utils.py", line 137, in deco
    raise_from(converted)
  File "<string>", line 3, in raise_from
pyspark.sql.utils.AnalysisException: Table or view not found: df; line 1 pos 52;
'Sort ['CustomerID ASC NULLS FIRST, count(1) DESC NULLS LAST], true
+- 'Aggregate ['CustomerID, 'TrackID], ['CustomerID, 'TrackID, count(1) AS rating#33L]
   +- 'UnresolvedRelation [df]

I searched online, and every solution posted for this same error was Hive-related. I spent a whole evening trying to install Hive and still could not get it working: every site describes a different way to configure hive-site.xml, and since I know nothing about Java I could not even get it initialized. Just as I was close to despair I found this article: https://blog.csdn.net/u012501054/article/details/85251779. It points out that a DataFrame must first be registered as a Table or View before it can be queried with SQL.
The code, modified accordingly:

import pandas as pd
from pandas import DataFrame
from pyspark.sql import SparkSession

df = DataFrame(pd.read_csv('Tracks.csv',index_col=0))
df.columns=['CustomerID', 'TrackID', 'Date', 'Mobile', 'ZIP']
spark = SparkSession \
        .builder \
        .appName("Python Spark SQL Hive integration example") \
        .getOrCreate()
df = spark.createDataFrame(df)
df.show()
train_data=df.select('CustomerID','TrackID').distinct()
# Register the DataFrame as a temporary view so spark.sql can resolve the name "df"
df.createOrReplaceTempView("df")
spark.sql("select CustomerID, TrackID, count(*) as rating from df \
           group by CustomerID, TrackID \
           order by CustomerID, count(*) desc").show(8)

Problem solved; the program now runs correctly:

+----------+-------+------+
|CustomerID|TrackID|rating|
+----------+-------+------+
|         0|      0|    93|
|         0|      1|    55|
|         0|      2|    53|
|         0|      4|    45|
|         0|      3|    41|
|         0|      5|    39|
|         0|      8|    31|
|         0|      6|    31|
+----------+-------+------+
only showing top 8 rows


Process finished with exit code 0