Recommender System using ALS in Pyspark

https://medium.com/@brunoborges_38708/recommender-system-using-als-in-pyspark-10329e1d1ee1

In the previous articles of this series, we implemented a data lake that processes raw data through the Source (Bronze), Landing (Silver), and Curated (Gold) zones, using the AWS video game review database at this link as an example. Now, as the next step, we'll use the curated base to implement a recommender system. With it in place, users of the review platform can enjoy a more personalized experience, discovering relevant new games while increasing their engagement with the platform.

Recommendation Paradigms

There are several possible approaches to building a recommender system, but here we will explore two of the main techniques used: collaborative filtering and content-based filtering.

Collaborative Filtering

Collaborative filtering uses the collective behavior of users to make recommendations. It's based on the idea that if two users had similar interests in the past, they are likely to share interests in future games. This method requires an extensive dataset of user-item interactions, which a large review database like this one can provide. In general, the process involves the following steps:

  • Rating matrix: Build a matrix where the rows represent the users, the columns represent the games, and the values are the ratings given by the users to the games.
  • Similarity between users: Calculate the similarity between users based on their ratings, using metrics such as Euclidean distance or Pearson correlation.
  • Neighbor selection: Identify the users most similar to the target user based on the calculated similarity.
  • Generation of recommendations: Based on the games rated highly by the neighboring users, recommend games that the target user has not yet rated.

Collaborative filtering algorithms, such as ALS, SVD, or even neural-network-based approaches, create an embedding for each user and item: a vector that allows similarities to be evaluated, generating highly accurate recommendations.
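
To make the steps above concrete, here is a minimal sketch of neighborhood-based collaborative filtering on a small hypothetical ratings matrix (toy data, not the AWS base):

import numpy as np

# Hypothetical ratings matrix: rows = users, columns = games, 0 = not rated
R = np.array([[5, 4, 0, 1],
              [4, 5, 0, 2],
              [1, 0, 5, 4],
              [0, 1, 4, 5]], dtype=float)

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Similarity between the target user (row 0) and every user
target = 0
sims = np.array([cosine(R[target], R[u]) for u in range(R.shape[0])])
sims[target] = 0  # exclude the target user from their own neighborhood

# Score items by the similarity-weighted ratings of the neighbors
scores = sims @ R / sims.sum()
scores[R[target] > 0] = -np.inf  # mask items the target user already rated
print("Recommended item:", int(np.argmax(scores)))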

Content Based Filtering

Content-based filtering focuses on the characteristics of the games themselves and the stated interests of the users. It requires detailed information about the games, such as genre, theme, playstyle, graphics, and more. The steps to implement this approach are as follows:

  • Feature extraction: Analyze and extract relevant information from game metadata to create a feature profile for each game.
  • User profile: Build each user's preference profile based on previous ratings and stated preferences.
  • Similarity between games: Calculate the similarity between games based on their features, using techniques such as cosine similarity or the Jaccard index.
  • Recommendation generation: Recommend games that share characteristics with the games the user rated highly, as sketched below.
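
As an illustration of content-based filtering, the sketch below builds a user profile from hypothetical one-hot genre features and recommends the closest unplayed game (the toy data and feature names are assumptions):

import numpy as np

# Hypothetical item features (one-hot genres: [action, rpg, strategy])
item_features = np.array([[1, 0, 0],   # game 0
                          [1, 1, 0],   # game 1
                          [0, 1, 1],   # game 2
                          [0, 0, 1]])  # game 3

# User profile: average features of the games the user rated highly (0 and 1)
user_profile = item_features[[0, 1]].mean(axis=0)

# Cosine similarity between the user profile and every game
norms = np.linalg.norm(item_features, axis=1) * np.linalg.norm(user_profile)
similarity = item_features @ user_profile / norms

# Recommend the most similar game the user has not played yet
similarity[[0, 1]] = -np.inf
print("Recommended game:", int(np.argmax(similarity)))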


Collaborative filtering is recommended when we have a large number of users and items but sparse ratings, and when it is difficult to obtain sufficient information about item characteristics, which makes content-based filtering less effective. When detailed information about the items is available, content-based filtering can be very effective: it leverages that information to make personalized recommendations based on the characteristics users have shown they like in the past. It also helps with new users: content-based filtering can start making recommendations immediately, based on the item features that match a user's stated preferences, even before any rating history exists.

Many recommendation systems use hybrid approaches that combine elements of collaborative filtering and content-based filtering to achieve better recommendation results. Collaborative filtering and content-based filtering each have their own advantages and limitations, and the combination of the two can compensate for their weaknesses, providing more accurate and diverse recommendations.

Implicit and explicit recommendation

Recommendations can be classified into two main types based on how they are obtained from users’ interactions with the system: explicit recommendation and implicit recommendation.

  • Explicit Recommendation: In explicit recommendation, users provide ratings or direct feedback on items, explicitly indicating their preferences. This is the case with the AWS game base: users can assign a rating or stars after playing a game, indicating their opinion about it. Explicit feedback has the advantage of being easier to interpret, since it represents users' direct opinions, but not all users rate all items, which can lead to a sparse rating matrix.
  • Implicit Recommendation: In implicit recommendation, data is collected from users' implicit behavior, such as clicks, purchases, time spent on pages, or viewing history. These behaviors can indicate a user's interest in an item even when there is no direct rating or feedback. For example, on a music streaming service, frequent plays of certain songs or artists can be used to infer the user's interest in them.
    Implicit feedback has the advantage of relying less on direct input and being far more abundant, but implicit interactions may not reveal enough detail about a user's specific preferences, making recommendations less personalized. Spark's ALS supports this setting directly, as shown in the sketch below.
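
Spark's ALS handles the implicit case through its implicitPrefs parameter. A minimal configuration sketch follows; note that the clicks column is a hypothetical interaction count, not a field of the AWS review base:

from pyspark.ml.recommendation import ALS

# With implicitPrefs=True, ALS models confidence levels derived from
# interaction counts instead of explicit ratings
als_implicit = ALS(userCol='userIndex', itemCol='itemIndex',
                   ratingCol='clicks',     # hypothetical interaction counts
                   implicitPrefs=True,
                   alpha=40.0,             # scales counts into confidence
                   coldStartStrategy='drop')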

Exploring the database

We begin by exploring the AWS video game review base. The curated data used in this article can be downloaded from this link.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, expr, rank, count, rand
from pyspark.sql.window import Window
from pyspark.ml.recommendation import ALS
from pyspark.ml.evaluation import RegressionEvaluator
from pyspark.ml.feature import StringIndexer
from pyspark.ml.tuning import TrainValidationSplit, ParamGridBuilder, CrossValidator
import pandas as pd

spark = SparkSession.builder.appName(
    'video_games_review').config("spark.driver.memory", "15g").getOrCreate()

df_landing = pd.read_parquet('video_games_review_landing.parquet')
spark_df_landing =  spark.createDataFrame(df_landing) 

spark_df_landing.show(vertical=True)

[Output: first row of the dataframe]

Basically, we need three columns to train our recommendation algorithm using collaborative filtering:

  • userId — the user's ID
  • itemId — the item's ID
  • rating — the rating the user gave to the item

df_rec = spark_df_landing.select('reviewerID', 'asin', 'overall').withColumnRenamed("reviewerID","userId")\
                                                                 .withColumnRenamed("asin","itemId")\
                                                                 .withColumnRenamed("overall","rating")
df_rec = df_rec.orderBy("userId", "itemId")

When training an algorithm and seeking more reliable performance evaluations, it can be useful to limit the number of items, since many are rated infrequently, leading to imprecise recommendations. Here, we will select only the top 500 most popular items from the dataset. Likewise, making recommendations for users who have rated only a few items would be imprecise, so we will filter the dataset to include only users who have rated 5 or more items.

popularity_df = df_rec.groupBy('itemId') \
                 .agg(count('*').alias('popularity')) \
                 .orderBy(col('popularity').desc())

# Select the top 500 most popular items
top_popular_items = popularity_df.limit(500)
df_rec_filtered = df_rec.join(top_popular_items, on='itemId', how='inner')

# Window used later to rank each user's items in random order for masking
user_window = Window.partitionBy("userId").orderBy(rand(seed=42))

# Create a column with the count of items per user and filter the base to select
# only users with 5 items or more
df_rec_filtered = df_rec_filtered.withColumn("num_items", expr("count(*) over (partition by userId)"))
df_rec_filtered = df_rec_filtered.filter(col("num_items") >= 5)

And here we have the number of items and users in our filtered base:

# Count the number of unique items
num_unique_items = df_rec_filtered.select('itemId').distinct().count()
print(f"Number of unique items: {num_unique_items}")

# Count the number of unique users
num_unique_users = df_rec_filtered.select('userId').distinct().count()
print(f"Number of unique users: {num_unique_users}")

[Output: number of unique items and users in the filtered base]

import matplotlib.pyplot as plt

items_per_user = df_rec_filtered.groupBy('userId').count().select('count').toPandas()


# Plot the histogram
plt.hist(items_per_user['count'], bins=10, range=(1,15), edgecolor='black')
plt.xlabel('Number of Items')
plt.ylabel('Number of Users')
plt.title('Distribution of Number of Items per User')
plt.show()

[Figure: distribution of the number of items per user]

Dataset Split for Recommendation Systems

When training recommendation algorithms, the traditional random split of data into training and testing sets may not be suitable. In the context of recommendation systems, we need to handle the unique challenge of ensuring that all users are present in both the training and testing datasets.

To address this issue, a common approach is to perform a user-level dataset split. Instead of randomly dividing the data at the individual data point level, we mask a certain percentage of items for each user and use these masked items as the testing set. Specifically, we randomly hide a portion (e.g., 20%) of the items that each user has interacted with in the training set.

This user-level split allows us to ensure that the testing set contains users with diverse preferences, which is critical for evaluating the performance of the recommendation system accurately. By using metrics such as precision and NDCG on the masked items, we can compare the predicted recommendations with the actual hidden items to assess the quality and effectiveness of the recommendation model. This method provides a more realistic evaluation of the recommendation system’s performance in a scenario where some user-item interactions are unknown during training but need to be predicted during testing.

# For example, 30% of items will be masked
percent_items_to_mask = 0.3 
# Determine the number of items to mask for each user
df_rec_final = df_rec_filtered.withColumn("num_items_to_mask", (col("num_items") * percent_items_to_mask).cast("int"))
# Rank each user's items in random order; the top-ranked ones will be masked
df_rec_final = df_rec_final.withColumn("item_rank", rank().over(user_window))

# Create a StringIndexer model to index the user ID column
indexer_user = StringIndexer(inputCol='userId', outputCol='userIndex').setHandleInvalid("keep")
indexer_item = StringIndexer(inputCol='itemId', outputCol='itemIndex').setHandleInvalid("keep")

# Fit the indexer model to the data and transform the DataFrame
df_rec_final = indexer_user.fit(df_rec_final).transform(df_rec_final)
df_rec_final = indexer_item.fit(df_rec_final).transform(df_rec_final)

# Convert the userIndex column to integer type
df_rec_final = df_rec_final.withColumn('userIndex', df_rec_final['userIndex'].cast('integer'))\
               .withColumn('itemIndex', df_rec_final['itemIndex'].cast('integer'))

train_df_rec = df_rec_final.filter(col("item_rank") > col("num_items_to_mask"))
test_df_rec = df_rec_final.filter(col("item_rank") <= col("num_items_to_mask"))

For each user, the items to be masked were selected at random. The userIndex and itemIndex columns are the indexed versions of the user and item IDs, which Spark requires as numeric inputs when modeling a recommender system.

Alternating Least Squares (ALS)

Spark implements ALS, an algorithm widely used in recommender systems based on collaborative filtering. It treats recommendation as a matrix factorization problem, where the ratings given by users to different items (games, in our case) are represented as a sparse matrix. Let's walk step by step through how this algorithm works:

Rating matrix
Initially, we build a matrix representing user ratings for the games. Each row of the matrix represents a user, each column represents a game, and the filled values are the ratings given by users to the games (or NaN to indicate that the user has not rated a given game).
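
For intuition, a toy version of such a matrix (hypothetical users and games) could be built as follows:

import numpy as np

# Toy rating matrix R: rows = users, columns = games, NaN = not rated
R = np.array([[5.0,    3.0, np.nan, 1.0],
              [4.0, np.nan,    4.0, 1.0],
              [1.0,    1.0, np.nan, 5.0]])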

Matrix factorization
ALS approaches the ratings matrix as the product of two smaller matrices: a matrix of users and a matrix of items (games). This factorization aims to find the “latent vectors” that represent the hidden characteristics of users and items.

Suppose we have a matrix R of user ratings for items (games), where the rows represent the users, the columns represent the items, and the values are the ratings given by the users to the items (or NaN where the user has not rated the item). The objective is to find two smaller matrices, P and Q, whose product is as close as possible to the original matrix R; that is, to find predictions 𝑟̂_𝑖𝑗 that are as close as possible to the actual ratings 𝑟_𝑖𝑗 of the matrix R. The loss function that we want to minimize is the sum of squared errors between the predictions and the actual ratings:

$$L = \sum_{(i,j) \in \Omega} \left( r_{ij} - \hat{r}_{ij} \right)^2 = \sum_{(i,j) \in \Omega} \left( r_{ij} - \mathbf{p}_i^{T} \mathbf{q}_j \right)^2$$

where Ω is the set of index pairs (i, j) corresponding to known evaluations in the matrix R.

[Figure: factorization of the rating matrix R into a user matrix P and an item matrix Qᵀ]

Initialization

First, we randomly initialize the P and Q matrices with numerical values (usually small) for the latent vectors (or latent factors) of users and items, respectively. The number of latent factors is a hyperparameter of the algorithm and needs to be defined in advance.

Least squares alternation
The name “Alternating Least Squares” comes from alternately minimizing the squared errors. First, we fix the item matrix (Q) and optimize the user matrix (P) to reduce the reconstruction error of the original ratings. We then fix the user matrix and optimize the item matrix to reduce the error further.

  • P optimization (Q fixed):
    To optimize the matrix P, we fix the matrix Q and minimize the sum of squared errors between the actual values of the matrix R and the values predicted by the product of P and the transpose of Q. With Q fixed, each user's latent vector can be found in closed form: we take the partial derivative of the loss function with respect to 𝑝_𝑖𝑘 and set it equal to zero:

$$\frac{\partial L}{\partial p_{ik}} = -2 \sum_{j \,:\, (i,j) \in \Omega} \left( r_{ij} - \mathbf{p}_i^{T} \mathbf{q}_j \right) q_{jk} = 0$$

And now we solve for 𝑝_𝑖𝑘 for each k to optimize the matrix P.

  • Q optimization (P fixed):
    We then fix the matrix P and optimize the matrix Q using the same sum-of-squared-errors minimization. We adjust the latent vectors of the items in Q so that the predictions of the product P Qᵀ approximate the actual ratings of the matrix R as closely as possible. We take the partial derivative of the loss function with respect to 𝑞_𝑗𝑘 and set it equal to zero:

$$\frac{\partial L}{\partial q_{jk}} = -2 \sum_{i \,:\, (i,j) \in \Omega} \left( r_{ij} - \mathbf{p}_i^{T} \mathbf{q}_j \right) p_{ik} = 0$$

And then we solve for 𝑞_𝑗𝑘 for each k to optimize the Q matrix.

Convergence

The algorithm iterates through these two alternating steps until the user matrix and item matrix reach a convergence point, meaning that the predictions of ratings get as close as possible to the actual ratings given by the users.

Recommendation Generation

After obtaining the optimized user and item matrices, we can use them to make predictions about the ratings a user would give to a specific game. Based on these predictions, we can recommend the highest-rated games that the user has not yet experienced.
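
To make the whole procedure concrete, here is a minimal NumPy sketch of ALS on a toy matrix. This is not Spark's optimized implementation; a small ridge term lam is included to keep each linear system well conditioned (regularization is discussed in a later section):

import numpy as np

def als(R, k=2, lam=0.1, iters=20):
    """Minimal ALS: factor R (NaN = missing) into P @ Q.T."""
    n_users, n_items = R.shape
    mask = ~np.isnan(R)
    rng = np.random.default_rng(0)
    P = rng.random((n_users, k)) * 0.1
    Q = rng.random((n_items, k)) * 0.1
    for _ in range(iters):
        # Fix Q and solve a small least-squares problem for each user
        for i in range(n_users):
            Qi, ri = Q[mask[i]], R[i, mask[i]]
            P[i] = np.linalg.solve(Qi.T @ Qi + lam * np.eye(k), Qi.T @ ri)
        # Fix P and solve for each item
        for j in range(n_items):
            Pj, rj = P[mask[:, j]], R[mask[:, j], j]
            Q[j] = np.linalg.solve(Pj.T @ Pj + lam * np.eye(k), Pj.T @ rj)
    return P, Q

R = np.array([[5.0,    3.0, np.nan, 1.0],
              [4.0, np.nan,    4.0, 1.0],
              [1.0,    1.0, np.nan, 5.0]])
P, Q = als(R)
print(np.round(P @ Q.T, 2))  # predicted ratings, including the NaN entries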

Creating a Baseline

One of the best baselines we can use to compare against our recommendation system is popularity-based, and it is quite simple to obtain: just set the number of latent factors in the ALS algorithm to 1. Let's understand why this works:

The ALS model with only 1 latent factor means that the system is representing users and items in a one-dimensional space. This implies that the model is not learning complex latent features or relationships between users and items. Instead, it considers only one overall dimension that captures the average trend of the rating data.

In this case, the rating matrix is approximated by a single outer product of the user and item vectors, i.e., a rank-one matrix. As a result, each predicted rating is the product of two scalars, so every user receives essentially the same item ranking, which reduces the recommendation process to an overall popularity ordering.

If all users have similar or identical ratings for a specific item, the ALS model with 1 latent factor will classify that item as popular, because the single latent factor will be equally applied to all users, without considering their individual preferences.
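
In Spark, this baseline is simply the same ALS configuration with rank=1, trained on the same data:

# Popularity-like baseline: a single latent factor per user and item
als_baseline = ALS(userCol='userIndex', itemCol='itemIndex', ratingCol='rating',
                   rank=1, coldStartStrategy='drop', nonnegative=True)
baseline_model = als_baseline.fit(train_df_rec)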

Regularization
In the loss function, Spark's ALS applies regularization to control overfitting and improve the overall performance of the recommender system. Regularization penalizes large values in the P and Q matrices during optimization, encouraging the latent vectors to stay small and thus preventing the model from overfitting the training data. Two common types of regularization can be combined with matrix factorization: L2 regularization (also known as Ridge regularization) and L1 regularization (also known as LASSO regularization); they differ in how the penalty is applied. (Spark's ALS implementation uses the L2 penalty, whose strength is set by the regParam parameter.)

  • L2 Regularization (Ridge): The L2 regularization adds a term to the loss function that is proportional to the sum of squares of the elements of the matrices P and Q. The objective is to force the values of the latent vectors to be small. The loss function with L2 regularization is given by:

$$L = \sum_{(i,j) \in \Omega} \left( r_{ij} - \mathbf{p}_i^{T} \mathbf{q}_j \right)^2 + \lambda \left( \sum_{i} \lVert \mathbf{p}_i \rVert^2 + \sum_{j} \lVert \mathbf{q}_j \rVert^2 \right)$$

where 𝜆 is the L2 regularization parameter that controls the strength of the penalty. The larger the value of 𝜆, the greater the penalty and the more strongly the latent vectors are regularized.

  • L1 Regularization (LASSO): The L1 regularization adds a term to the loss function that is proportional to the sum of the absolute values of the elements of the P and Q matrices. Like the L2 regularization, the L1 regularization also encourages the latent vectors to have smaller values, but in a more sparse way. The loss function with L1 regularization is given by:

$$L = \sum_{(i,j) \in \Omega} \left( r_{ij} - \mathbf{p}_i^{T} \mathbf{q}_j \right)^2 + \lambda \left( \sum_{i,k} \lvert p_{ik} \rvert + \sum_{j,k} \lvert q_{jk} \rvert \right)$$

The larger the value of 𝜆, the more sparse the latent vectors become.
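
Note that with L2 regularization each alternating step still has a closed-form solution, which is what makes ALS efficient. Fixing Q, the optimal latent vector for user i is:

$$\mathbf{p}_i = \left( Q_{\Omega_i}^{T} Q_{\Omega_i} + \lambda I \right)^{-1} Q_{\Omega_i}^{T} \mathbf{r}_{\Omega_i}$$

where $Q_{\Omega_i}$ stacks the latent vectors of the items rated by user $i$ and $\mathbf{r}_{\Omega_i}$ holds the corresponding ratings; the update for each $\mathbf{q}_j$ is symmetric.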

Using ALS with Spark
We configure the ALS model as follows:

  • userCol: Specifies the name of the column that contains the user indexes.
  • itemCol: Specifies the name of the column that contains the item indexes (games, in this case).
  • ratingCol: Specifies the name of the column that contains user ratings for the items.
  • coldStartStrategy: Specifies how to handle users or items that appear at prediction time but were not seen during training. Here, "drop" means that rows whose predictions would be NaN (e.g., for unseen users or items) are dropped.
  • nonnegative: Indicates whether predictions should be restricted to non-negative values.

Then a hyperparameter grid is created for the ALS using the ParamGridBuilder class. Here, three hyperparameters are being specified:

  • rank: Specifies the number of latent factors of the ALS model (the dimensionality of the user and item vectors).
  • maxIter: Specifies the maximum number of iterations that ALS can run during training.
  • regParam: Specifies the regularization term that controls the strength of the penalty to avoid overfitting.

A regression evaluator is created to evaluate the performance of the ALS model. It uses the RMSE (Root Mean Squared Error) metric to calculate the error between the actual ratings and the model predictions. The CrossValidator object performs cross-validation: it takes the param_grid hyperparameter grid and the evaluator, and splits the data into 3 folds to evaluate model performance.

# Configure the ALS model
als = ALS(userCol='userIndex', itemCol='itemIndex', ratingCol='rating',
          coldStartStrategy='drop', nonnegative=True)


param_grid = ParamGridBuilder()\
             .addGrid(als.rank, [1, 20, 30])\
             .addGrid(als.maxIter, [20])\
             .addGrid(als.regParam, [.05, .15])\
             .build()
evaluator = RegressionEvaluator(metricName='rmse', labelCol='rating', predictionCol='prediction')

cv = CrossValidator(
        estimator=als,
        estimatorParamMaps=param_grid,
        evaluator=evaluator,
        numFolds=3)

model = cv.fit(train_df_rec)

best_model = model.bestModel
print('rank: ', best_model.rank)
print('MaxIter: ', best_model._java_obj.parent().getMaxIter())
print('RegParam: ', best_model._java_obj.parent().getRegParam())

[Output: best hyperparameters found by cross-validation (rank 1)]

Using RMSE as a metric, the best model has rank 1, which indicates that the baseline performed better.

# The best model found by cross-validation is already fitted on the training data

# Generate predictions on the test data and clamp them to the valid rating range [1, 5]
predictions = best_model.transform(test_df_rec)
predictions = predictions.withColumn("prediction", expr("CASE WHEN prediction < 1 THEN 1 WHEN prediction > 5 THEN 5 ELSE prediction END"))

evaluator = RegressionEvaluator(metricName='rmse', labelCol='rating', predictionCol='prediction')
rmse = evaluator.evaluate(predictions)
print(f'Root Mean Squared Error (RMSE): {rmse}')

[Output: RMSE on the masked test set]

With the trained model, we will generate the top 100 item recommendations for each user, filtering out the items that already appear in the training set:

from pyspark.sql.functions import col, collect_list

# Generate the top-100 recommendations for each user
userRecs = best_model.recommendForAllUsers(100)

# Collect each user's ground-truth (masked) items and training items
user_ground_truth = test_df_rec.groupby('userIndex').agg(collect_list('itemIndex').alias('ground_truth_items'))
user_train_items = train_df_rec.groupby('userIndex').agg(collect_list('itemIndex').alias('train_items'))

# Join the recommendations and ground truth data on the user ID
user_eval = userRecs.join(user_ground_truth, on='userIndex').join(user_train_items, on='userIndex') \
    .select('userIndex', 'recommendations.itemIndex', 'ground_truth_items', 'train_items', 'recommendations.rating')
user_eval = user_eval.toPandas()
# Remove items already seen in training from each user's recommendation list
user_eval['itemIndex_filtered'] = user_eval.apply(lambda x: [b for (b, z) in zip(x.itemIndex, x.rating) if b not in x.train_items], axis=1)
user_eval['rating_filtered'] = user_eval.apply(lambda x: [z for (b, z) in zip(x.itemIndex, x.rating) if b not in x.train_items], axis=1)

Evaluation metrics

There are several metrics that we can use to evaluate recommender systems. These metrics measure different aspects of system performance and are useful for understanding how recommendations compare to users’ real preferences and interests.

  • Precision@k is a metric that measures the proportion of relevant items that were recommended in the first k items of the recommendation list. It focuses on the hit rate of the recommendations in relation to the total number of recommended items. The formula for calculating precision@k is as follows:

$$\text{precision@}k = \frac{\left| \{\text{relevant items}\} \cap \{\text{top-}k \text{ recommended items}\} \right|}{k}$$

Here, “relevant items” are the items that the user interacted with positively or rated highly. Therefore, precision@k measures the ability of the recommender system to provide relevant recommendations on the first k items in the list.

  • Recall@k is a metric that measures the proportion of relevant items that were recommended against the total number of relevant items in the database. Unlike precision@k, recall@k focuses on the recall rate of relevant items among all available relevant items. The formula for calculating the recall@k is as follows:

$$\text{recall@}k = \frac{\left| \{\text{relevant items}\} \cap \{\text{top-}k \text{ recommended items}\} \right|}{\left| \{\text{relevant items}\} \right|}$$

The recall@k is important to assess whether the recommender system is able to retrieve all relevant items in its recommendation list, regardless of order.

  • NDCG (Normalized Discounted Cumulative Gain): The NDCG is a metric that takes into account the relevance of recommended items and the position in which they were recommended. It values relevant recommendations that are closer to the top of the recommendation list, assigning higher scores to relevant items in higher positions. The formula for calculating the NDCG is as follows:

$$\text{NDCG@}k = \frac{\text{DCG@}k}{\text{IDCG@}k}$$

where DCG@k (Discounted Cumulative Gain) is calculated as:

$$\text{DCG@}k = \sum_{i=1}^{k} \frac{2^{rel_i} - 1}{\log_2(i + 1)}$$

where rel_i is the relevance of the item at position i (here, 1 if the item is among the user's masked items and 0 otherwise),

and IDCG@k (Ideal Discounted Cumulative Gain) is the ideal DCG@k value, obtained when all k recommended items are relevant.

The NDCG@k varies between 0 and 1, where 1 indicates that all recommendations are perfectly relevant and are positioned at the top of the list.

To apply these metrics, we use the masked items from the test base as the relevant items. If the system recommended 2 items that were in the user's test base within the top 5 positions, its precision@5 is 2/5. If that user had 7 masked items, the recall@5 is 2/7. Now suppose one recommender system placed those items in the first and second positions and another in the fourth and fifth: the precision@5 is the same for both, but NDCG takes the positions into account, penalizing lower ones. Taking the ideal ranking to contain the two relevant items at the top, the NDCG@5 of the first system is 1.00, indicating that all relevant recommendations occupy the best positions, while the NDCG@5 of the second system is approximately 0.50, indicating that the relevant recommendations sit below where an ideal system would place them.
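
These numbers can be checked in a few lines, using the same gain and discount as the score function shown later (binary relevance, ideal ranking containing the two relevant items):

import math

def dcg(relevances):
    return sum((2**rel - 1) / math.log2(pos + 2) for pos, rel in enumerate(relevances))

ideal = dcg([1, 1])                      # both relevant items at the top
print(dcg([1, 1, 0, 0, 0]) / ideal)      # first system: positions 1 and 2 -> 1.00
print(dcg([0, 0, 0, 1, 1]) / ideal)      # second system: positions 4 and 5 -> ~0.50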

Broadly speaking, we will evaluate the recommender system using the average NDCG, which averages the NDCG over all users in the test set, and the Mean Average Precision (MAP). The MAP is a summary measure of precision at different cutoff points (k) in the recommendation list. To calculate it, we first compute the Average Precision for each user.

Average Precision is a metric that considers the relevance of recommended items along the recommendation list: it measures the proportion of relevant items found at each position up to item k, and these proportions are averaged for each user. The MAP is then calculated as the mean of the Average Precision over all users in the test set.

A Python implementation is shown below:

import numpy as np
import math

def score(predicted, actual, metric):
    """
    Parameters
    ----------
    predicted : list
        List of recommended items, ordered by predicted relevance.
    actual : list
        List of masked (ground-truth) items.
    metric : 'precision' or 'ndcg'
        A valid metric for recommendation.
    Raises
    ------
    Exception
        If the metric is not one of the valid options.
    Returns
    -------
    m : float
        Score.
    """
    valid_metrics = ['precision', 'ndcg']
    if metric not in valid_metrics:
        raise Exception(f"Choose one valid metric in the list: {valid_metrics}")
    if metric == 'precision':
        # Average of precision@k for k = 1 .. number of masked items
        m = np.mean([float(len(set(predicted[:k]) & set(actual))) / float(k)
                     for k in range(1, len(actual) + 1)])
    if metric == 'ndcg':
        # Binary relevance of each recommended item
        v = [1 if i in actual else 0 for i in predicted]
        # Ideal ranking: all masked items at the top of the list
        v_2 = [1 for i in actual]
        dcg = sum([(2**i - 1) / math.log(k + 2, 2) for (k, i) in enumerate(v)])
        idcg = sum([(2**i - 1) / math.log(k + 2, 2) for (k, i) in enumerate(v_2)])
        m = dcg / idcg
    return m

user_eval['precision'] = user_eval.apply(lambda x: score(x.itemIndex_filtered, x.ground_truth_items, 'precision'), axis=1)
user_eval['NDCG'] = user_eval.apply(lambda x: score(x.itemIndex_filtered, x.ground_truth_items, 'ndcg'), axis=1)

MAP = user_eval.precision.mean()
avg_NDCG = user_eval.NDCG.mean()

[Output: MAP and average NDCG for the evaluated models]

These results highlight the trade-off between model performance based on different evaluation metrics. While the model with rank 1 (baseline) achieved better accuracy in predicting user ratings (lower RMSE), the model with rank 30 provided more relevant and higher-quality recommendations (higher NDCG) to users. The choice between the two models would depend on the specific priorities and requirements of the recommendation system, balancing accuracy in rating prediction with the ability to deliver personalized and relevant recommendations to users.
