💖💖 Author: 计算机毕业设计杰瑞
💙💙 About me: I spent years teaching computer-science training courses and still enjoy teaching. I work mainly in Java, WeChat Mini Programs, Python, Golang, and Android, and my projects span big data, deep learning, websites, mini programs, Android apps, and algorithms. I regularly take on custom project development, code walkthroughs, thesis-defense coaching, and documentation writing, and I also know some techniques for lowering plagiarism-check scores. I enjoy sharing solutions to problems I run into during development and talking about technology, so feel free to ask me anything code-related!
💛💛 A final word: thank you all for your attention and support!
💜💜
Website practical projects
Android / Mini Program practical projects
Big data practical projects
Deep learning practical projects
Recommended topics for computer-science graduation projects
Introduction to the Big-Data-Based Douban Movie Ranking Data Visualization and Analysis System
The big-data-based Douban movie ranking data visualization and analysis system is an integrated platform covering data collection, storage, analysis, and visual presentation. It uses the Hadoop Distributed File System (HDFS) as the underlying storage layer and leverages Spark's in-memory computing to mine and analyze large volumes of Douban movie data. The backend analysis services are implemented as Django views (shown in the code section below), while the frontend uses a Vue + ElementUI + Echarts stack for the interactive UI and chart rendering. The movie-overview module summarizes overall industry trends; the prolific-actor module identifies the industry's most productive talent; the rating-vote correlation module explores the relationship between audience attention and film quality; the regional-production module compares output across different film industries; and the production-trend module projects where the industry is heading. Spark SQL handles the complex queries, Pandas and NumPy support data preprocessing, and MySQL persists the structured results, giving industry researchers, investors, and students a solid data platform to work from.
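To make the pipeline concrete, here is a minimal sketch of the HDFS → Spark SQL → MySQL flow described above. The database name, table name, and credentials are illustrative assumptions, and the JDBC write requires the MySQL connector JAR on the Spark classpath:

from pyspark.sql import SparkSession

# Sketch: read the Douban ranking CSV from HDFS, aggregate with Spark SQL,
# then persist the result to MySQL over JDBC (connector JAR assumed available).
spark = SparkSession.builder.appName("PersistSketch").getOrCreate()
df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("hdfs://localhost:9000/movie_data/douban_movies.csv")
df.createOrReplaceTempView("movies")
genre_stats = spark.sql("SELECT genre, COUNT(*) AS movie_count, ROUND(AVG(rating), 2) AS avg_rating FROM movies WHERE genre IS NOT NULL GROUP BY genre")
genre_stats.write.format("jdbc") \
    .option("url", "jdbc:mysql://localhost:3306/movie_db?useSSL=false") \
    .option("dbtable", "genre_stats") \
    .option("user", "root") \
    .option("password", "<password>") \
    .mode("overwrite") \
    .save()
spark.stop()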
Demo video of the big-data-based Douban movie ranking data visualization and analysis system
[Data Analysis] Big-data-based Douban movie ranking data visualization and analysis system | big-data visualization dashboard | big-data graduation practical project | topic recommendation | documentation guidance | deployment walkthrough | Hadoop Spark
Demo screenshots of the big-data-based Douban movie ranking data visualization and analysis system
Code showcase for the big-data-based Douban movie ranking data visualization and analysis system
from pyspark.sql import SparkSession
from pyspark.sql.functions import avg, round as spark_round
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
# Core big-data processing: movie overview analysis
@csrf_exempt
def movie_overview_analysis(request):
    spark = SparkSession.builder.appName("MovieOverviewAnalysis").config("spark.sql.adaptive.enabled", "true").config("spark.sql.adaptive.coalescePartitions.enabled", "true").getOrCreate()
    # Load the Douban movie ranking data from HDFS
    df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("hdfs://localhost:9000/movie_data/douban_movies.csv")
    df.createOrReplaceTempView("movies")
    total_movies = df.count()
    avg_rating = df.select(spark_round(avg("rating"), 2).alias("avg_rating")).collect()[0]["avg_rating"]
    # Bucket movies into five rating levels
    rating_distribution = spark.sql("SELECT CASE WHEN rating >= 9.0 THEN 'Excellent (9.0+)' WHEN rating >= 8.0 THEN 'Good (8.0-8.9)' WHEN rating >= 7.0 THEN 'Average (7.0-7.9)' WHEN rating >= 6.0 THEN 'Below average (6.0-6.9)' ELSE 'Poor (<6.0)' END as rating_level, COUNT(*) as count FROM movies WHERE rating IS NOT NULL GROUP BY rating_level ORDER BY rating_level").collect()
    # Top 10 genres by movie count, with their average ratings
    genre_stats = spark.sql("SELECT genre, COUNT(*) as movie_count, ROUND(AVG(rating), 2) as avg_rating FROM movies WHERE genre IS NOT NULL GROUP BY genre ORDER BY movie_count DESC LIMIT 10").collect()
    # Yearly production volume and average rating, 2000-2023
    year_trend = spark.sql("SELECT year, COUNT(*) as movie_count, ROUND(AVG(rating), 2) as avg_rating FROM movies WHERE year >= 2000 AND year <= 2023 GROUP BY year ORDER BY year").collect()
    # Top 20 movies by rating, with vote count as the tiebreaker
    top_rated_movies = spark.sql("SELECT title, rating, director, year FROM movies WHERE rating IS NOT NULL ORDER BY rating DESC, vote_count DESC LIMIT 20").collect()
    # Average rating per attention (vote-count) band
    vote_rating_correlation = spark.sql("SELECT CASE WHEN vote_count >= 100000 THEN 'High attention (100k+)' WHEN vote_count >= 10000 THEN 'Medium attention (10k-100k)' WHEN vote_count >= 1000 THEN 'Low attention (1k-10k)' ELSE 'Minimal attention (<1k)' END as vote_level, COUNT(*) as count, ROUND(AVG(rating), 2) as avg_rating FROM movies WHERE vote_count IS NOT NULL GROUP BY vote_level ORDER BY vote_level").collect()
    # Average rating per runtime band
    duration_analysis = spark.sql("SELECT CASE WHEN duration >= 180 THEN 'Epic (3h+)' WHEN duration >= 120 THEN 'Long (2-3h)' WHEN duration >= 90 THEN 'Standard (1.5-2h)' ELSE 'Short (<1.5h)' END as duration_type, COUNT(*) as count, ROUND(AVG(rating), 2) as avg_rating FROM movies WHERE duration IS NOT NULL GROUP BY duration_type ORDER BY duration_type").collect()
    # Release-month distribution
    monthly_distribution = spark.sql("SELECT release_month, COUNT(*) as movie_count, ROUND(AVG(rating), 2) as avg_rating FROM movies WHERE release_month IS NOT NULL GROUP BY release_month ORDER BY release_month").collect()
    spark.stop()
    result_data = {
        "total_movies": total_movies,
        "avg_rating": float(avg_rating) if avg_rating is not None else 0.0,
        "rating_distribution": [{"level": row["rating_level"], "count": row["count"]} for row in rating_distribution],
        "genre_stats": [{"genre": row["genre"], "count": row["movie_count"], "avg_rating": float(row["avg_rating"])} for row in genre_stats],
        "year_trend": [{"year": row["year"], "count": row["movie_count"], "avg_rating": float(row["avg_rating"])} for row in year_trend],
        "top_movies": [{"title": row["title"], "rating": float(row["rating"]), "director": row["director"], "year": row["year"]} for row in top_rated_movies],
        "vote_correlation": [{"level": row["vote_level"], "count": row["count"], "avg_rating": float(row["avg_rating"])} for row in vote_rating_correlation],
        "duration_analysis": [{"type": row["duration_type"], "count": row["count"], "avg_rating": float(row["avg_rating"])} for row in duration_analysis],
        "monthly_dist": [{"month": row["release_month"], "count": row["movie_count"], "avg_rating": float(row["avg_rating"])} for row in monthly_distribution],
    }
    return JsonResponse({"code": 200, "message": "Movie overview analysis complete", "data": result_data})
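A minimal sketch of how these views could be exposed as JSON endpoints, assuming a standard Django project layout (the module path and URL paths below are illustrative, not from the original project):

from django.urls import path
from . import views  # module containing the analysis views above

# Hypothetical routes; adjust the paths to match the actual project
urlpatterns = [
    path("api/movie/overview/", views.movie_overview_analysis),
    path("api/movie/rating-vote/", views.rating_vote_correlation_analysis),
    path("api/movie/region-production/", views.region_production_analysis),
]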
# Core analysis: correlation between ratings and vote counts
@csrf_exempt
def rating_vote_correlation_analysis(request):
    spark = SparkSession.builder.appName("RatingVoteCorrelation").config("spark.sql.adaptive.enabled", "true").config("spark.serializer", "org.apache.spark.serializer.KryoSerializer").getOrCreate()
    df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("hdfs://localhost:9000/movie_data/douban_movies.csv")
    df.createOrReplaceTempView("movies")
    # Rating statistics per vote-count band
    vote_ranges = spark.sql("SELECT CASE WHEN vote_count >= 500000 THEN '500k+' WHEN vote_count >= 100000 THEN '100k-500k' WHEN vote_count >= 50000 THEN '50k-100k' WHEN vote_count >= 10000 THEN '10k-50k' WHEN vote_count >= 1000 THEN '1k-10k' ELSE '<1k' END as vote_range, COUNT(*) as movie_count, ROUND(AVG(rating), 2) as avg_rating, ROUND(MAX(rating), 2) as max_rating, ROUND(MIN(rating), 2) as min_rating FROM movies WHERE vote_count IS NOT NULL AND rating IS NOT NULL GROUP BY vote_range ORDER BY movie_count DESC").collect()
    # Scatter-plot sample: the 1000 most-voted movies
    rating_vote_scatter = spark.sql("SELECT rating, vote_count, title, year, genre FROM movies WHERE rating IS NOT NULL AND vote_count IS NOT NULL AND vote_count > 0 ORDER BY vote_count DESC LIMIT 1000").collect()
    # Mismatch lists: heavily voted but poorly rated, and barely voted but highly rated
    high_vote_low_rating = spark.sql("SELECT title, rating, vote_count, director, year FROM movies WHERE vote_count >= 50000 AND rating < 7.0 ORDER BY vote_count DESC LIMIT 15").collect()
    low_vote_high_rating = spark.sql("SELECT title, rating, vote_count, director, year FROM movies WHERE vote_count < 5000 AND rating >= 8.5 ORDER BY rating DESC LIMIT 15").collect()
    # Per-genre rating/vote profile (genres with at least 20 movies)
    correlation_by_genre = spark.sql("SELECT genre, COUNT(*) as movie_count, ROUND(AVG(rating), 2) as avg_rating, ROUND(AVG(vote_count), 0) as avg_vote_count, ROUND(MAX(vote_count), 0) as max_vote_count FROM movies WHERE genre IS NOT NULL AND rating IS NOT NULL AND vote_count IS NOT NULL GROUP BY genre HAVING COUNT(*) >= 20 ORDER BY avg_vote_count DESC").collect()
    # 3x3 vote-level x rating-level contingency matrix
    vote_rating_matrix = spark.sql("SELECT vote_range, rating_range, COUNT(*) as count FROM (SELECT CASE WHEN vote_count >= 100000 THEN 'High votes' WHEN vote_count >= 10000 THEN 'Medium votes' ELSE 'Low votes' END as vote_range, CASE WHEN rating >= 8.0 THEN 'High rating' WHEN rating >= 7.0 THEN 'Medium rating' ELSE 'Low rating' END as rating_range FROM movies WHERE vote_count IS NOT NULL AND rating IS NOT NULL) t GROUP BY vote_range, rating_range ORDER BY vote_range, rating_range").collect()
    # Yearly Pearson correlation between rating and log-scaled vote count
    yearly_correlation = spark.sql("SELECT year, COUNT(*) as movie_count, ROUND(AVG(rating), 2) as avg_rating, ROUND(AVG(vote_count), 0) as avg_vote_count, ROUND(CORR(rating, LOG(vote_count + 1)), 3) as correlation_coeff FROM movies WHERE year >= 2000 AND year <= 2023 AND rating IS NOT NULL AND vote_count IS NOT NULL GROUP BY year ORDER BY year").collect()
    # Movies whose rating deviates by more than 1.5 from their vote band's average
    outlier_analysis = spark.sql("SELECT title, rating, vote_count, ABS(rating - avg_rating_for_vote_range) as rating_deviation FROM (SELECT title, rating, vote_count, AVG(rating) OVER (PARTITION BY CASE WHEN vote_count >= 100000 THEN 'High votes' WHEN vote_count >= 10000 THEN 'Medium votes' ELSE 'Low votes' END) as avg_rating_for_vote_range FROM movies WHERE rating IS NOT NULL AND vote_count IS NOT NULL) t WHERE ABS(rating - avg_rating_for_vote_range) > 1.5 ORDER BY rating_deviation DESC LIMIT 20").collect()
    spark.stop()
    analysis_result = {
        "vote_distribution": [{"range": row["vote_range"], "count": row["movie_count"], "avg_rating": float(row["avg_rating"]), "max_rating": float(row["max_rating"]), "min_rating": float(row["min_rating"])} for row in vote_ranges],
        "scatter_data": [{"rating": float(row["rating"]), "vote_count": row["vote_count"], "title": row["title"], "year": row["year"], "genre": row["genre"]} for row in rating_vote_scatter],
        "high_vote_low_rating": [{"title": row["title"], "rating": float(row["rating"]), "vote_count": row["vote_count"], "director": row["director"], "year": row["year"]} for row in high_vote_low_rating],
        "low_vote_high_rating": [{"title": row["title"], "rating": float(row["rating"]), "vote_count": row["vote_count"], "director": row["director"], "year": row["year"]} for row in low_vote_high_rating],
        "genre_correlation": [{"genre": row["genre"], "count": row["movie_count"], "avg_rating": float(row["avg_rating"]), "avg_vote": int(row["avg_vote_count"]), "max_vote": int(row["max_vote_count"])} for row in correlation_by_genre],
        "rating_vote_matrix": [{"vote_range": row["vote_range"], "rating_range": row["rating_range"], "count": row["count"]} for row in vote_rating_matrix],
        "yearly_trend": [{"year": row["year"], "count": row["movie_count"], "avg_rating": float(row["avg_rating"]), "avg_vote": int(row["avg_vote_count"]), "correlation": float(row["correlation_coeff"]) if row["correlation_coeff"] is not None else 0.0} for row in yearly_correlation],
        "outliers": [{"title": row["title"], "rating": float(row["rating"]), "vote_count": row["vote_count"], "deviation": float(row["rating_deviation"])} for row in outlier_analysis],
    }
    return JsonResponse({"code": 200, "message": "Rating-vote correlation analysis complete", "data": analysis_result})
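The yearly CORR query above uses log-scaled vote counts to damp the heavy tail of the vote distribution. The same coefficient can also be computed over the whole dataset with the DataFrame API; a minimal sketch, assuming df is the DataFrame loaded from HDFS as above:

from pyspark.sql.functions import log1p

# Pearson correlation between rating and log(1 + vote_count) over all movies
clean = df.dropna(subset=["rating", "vote_count"]).withColumn("log_votes", log1p("vote_count"))
overall_corr = clean.stat.corr("rating", "log_votes")  # plain Python float
print(round(overall_corr, 3))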
# Core analysis: production volume and quality by region
@csrf_exempt
def region_production_analysis(request):
    spark = SparkSession.builder.appName("RegionProductionAnalysis").config("spark.sql.adaptive.enabled", "true").config("spark.sql.adaptive.coalescePartitions.enabled", "true").getOrCreate()
    df = spark.read.format("csv").option("header", "true").option("inferSchema", "true").load("hdfs://localhost:9000/movie_data/douban_movies.csv")
    df.createOrReplaceTempView("movies")
    # Top 20 producing regions with rating and popularity statistics
    region_stats = spark.sql("SELECT country as region, COUNT(*) as movie_count, ROUND(AVG(rating), 2) as avg_rating, ROUND(AVG(vote_count), 0) as avg_vote_count, ROUND(MAX(rating), 2) as max_rating FROM movies WHERE country IS NOT NULL AND country != '' GROUP BY country ORDER BY movie_count DESC LIMIT 20").collect()
    # Genre mix per region (pairs with at least 5 movies)
    region_genre_analysis = spark.sql("SELECT country, genre, COUNT(*) as count, ROUND(AVG(rating), 2) as avg_rating FROM movies WHERE country IS NOT NULL AND genre IS NOT NULL GROUP BY country, genre HAVING COUNT(*) >= 5 ORDER BY country, count DESC").collect()
    # Per-region yearly trend, 2010-2023
    region_year_trend = spark.sql("SELECT country, year, COUNT(*) as movie_count, ROUND(AVG(rating), 2) as avg_rating FROM movies WHERE country IS NOT NULL AND year >= 2010 AND year <= 2023 GROUP BY country, year HAVING COUNT(*) >= 3 ORDER BY country, year").collect()
    # Regions ranked by share of movies rated 8.0 or higher
    top_regions_by_quality = spark.sql("SELECT country, COUNT(*) as total_movies, SUM(CASE WHEN rating >= 8.0 THEN 1 ELSE 0 END) as high_rating_movies, ROUND(SUM(CASE WHEN rating >= 8.0 THEN 1 ELSE 0 END) * 100.0 / COUNT(*), 2) as high_rating_percentage, ROUND(AVG(rating), 2) as avg_rating FROM movies WHERE country IS NOT NULL GROUP BY country HAVING COUNT(*) >= 20 ORDER BY high_rating_percentage DESC LIMIT 15").collect()
    # Share of long films and average popularity per region
    region_budget_analysis = spark.sql("SELECT country, COUNT(*) as movie_count, ROUND(AVG(CASE WHEN duration >= 120 THEN 1 ELSE 0 END) * 100, 2) as long_movie_percentage, ROUND(AVG(vote_count), 0) as avg_popularity FROM movies WHERE country IS NOT NULL GROUP BY country HAVING COUNT(*) >= 10 ORDER BY avg_popularity DESC LIMIT 15").collect()
    # Director diversity: distinct directors relative to total output
    cross_region_collaboration = spark.sql("SELECT country, COUNT(DISTINCT director) as unique_directors, COUNT(*) as total_movies, ROUND(COUNT(DISTINCT director) * 1.0 / COUNT(*), 3) as director_diversity_ratio FROM movies WHERE country IS NOT NULL AND director IS NOT NULL GROUP BY country HAVING COUNT(*) >= 15 ORDER BY director_diversity_ratio DESC LIMIT 12").collect()
    # Market performance: total and average votes per region
    region_market_performance = spark.sql("SELECT country, COUNT(*) as movie_count, ROUND(AVG(vote_count), 0) as avg_vote_count, SUM(vote_count) as total_votes, ROUND(AVG(rating), 2) as avg_rating, ROUND(MAX(vote_count), 0) as max_vote_count FROM movies WHERE country IS NOT NULL AND vote_count IS NOT NULL GROUP BY country HAVING COUNT(*) >= 10 ORDER BY total_votes DESC LIMIT 15").collect()
    # Rating-band breakdown per region
    region_rating_distribution = spark.sql("SELECT country, SUM(CASE WHEN rating >= 9.0 THEN 1 ELSE 0 END) as excellent_count, SUM(CASE WHEN rating >= 8.0 AND rating < 9.0 THEN 1 ELSE 0 END) as good_count, SUM(CASE WHEN rating >= 7.0 AND rating < 8.0 THEN 1 ELSE 0 END) as average_count, SUM(CASE WHEN rating < 7.0 THEN 1 ELSE 0 END) as poor_count FROM movies WHERE country IS NOT NULL AND rating IS NOT NULL GROUP BY country HAVING COUNT(*) >= 20 ORDER BY excellent_count DESC LIMIT 10").collect()
    # Recently active regions, 2018 onward
    emerging_regions = spark.sql("SELECT country, year, COUNT(*) as movie_count, ROUND(AVG(rating), 2) as avg_rating FROM movies WHERE country IS NOT NULL AND year >= 2018 GROUP BY country, year HAVING COUNT(*) >= 2 ORDER BY year DESC, movie_count DESC").collect()
    spark.stop()
    region_data = {
        "production_stats": [{"region": row["region"], "count": row["movie_count"], "avg_rating": float(row["avg_rating"]), "avg_vote": int(row["avg_vote_count"]), "max_rating": float(row["max_rating"])} for row in region_stats],
        "genre_distribution": [{"country": row["country"], "genre": row["genre"], "count": row["count"], "avg_rating": float(row["avg_rating"])} for row in region_genre_analysis],
        "yearly_trends": [{"country": row["country"], "year": row["year"], "count": row["movie_count"], "avg_rating": float(row["avg_rating"])} for row in region_year_trend],
        "quality_rankings": [{"country": row["country"], "total": row["total_movies"], "high_rating": row["high_rating_movies"], "percentage": float(row["high_rating_percentage"]), "avg_rating": float(row["avg_rating"])} for row in top_regions_by_quality],
        "market_analysis": [{"country": row["country"], "count": row["movie_count"], "long_movie_pct": float(row["long_movie_percentage"]), "popularity": int(row["avg_popularity"])} for row in region_budget_analysis],
        "collaboration_index": [{"country": row["country"], "directors": row["unique_directors"], "movies": row["total_movies"], "diversity": float(row["director_diversity_ratio"])} for row in cross_region_collaboration],
        "market_performance": [{"country": row["country"], "count": row["movie_count"], "avg_vote": int(row["avg_vote_count"]), "total_votes": row["total_votes"], "avg_rating": float(row["avg_rating"]), "max_vote": int(row["max_vote_count"])} for row in region_market_performance],
        "rating_distribution": [{"country": row["country"], "excellent": row["excellent_count"], "good": row["good_count"], "average": row["average_count"], "poor": row["poor_count"]} for row in region_rating_distribution],
        "emerging_markets": [{"country": row["country"], "year": row["year"], "count": row["movie_count"], "avg_rating": float(row["avg_rating"])} for row in emerging_regions],
    }
    return JsonResponse({"code": 200, "message": "Regional production analysis complete", "data": region_data})
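The introduction also mentions a prolific-actor analysis module that is not shown above. A minimal sketch of how it might be implemented, assuming the CSV has an actors column with names separated by '/' (both the column name and the separator are assumptions about the dataset):

from pyspark.sql.functions import col, explode, split, trim, count, avg, round as spark_round

# One row per (movie, actor) pair, then rank actors by film count and average rating
actor_stats = (df.filter(col("actors").isNotNull())
    .withColumn("actor", explode(split(col("actors"), "/")))
    .withColumn("actor", trim(col("actor")))
    .groupBy("actor")
    .agg(count("*").alias("movie_count"), spark_round(avg("rating"), 2).alias("avg_rating"))
    .orderBy(col("movie_count").desc())
    .limit(20))
actor_stats.show(truncate=False)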
Documentation showcase for the big-data-based Douban movie ranking data visualization and analysis system