My Spotify Journey Towards a Recommendation System

There's one particular event that cheers up my Mondays. No, it's not classes or work; I'm talking about the moment the amazing Spotify 'Discover Weekly' playlist updates. It's amazing the combination of songs you've never heard before but still love.


There are several articles online explaining the AI behind Spotify's beloved Discover Weekly feature. Here's one of my favourites:

Putting together my recent programming skills with my not-so-recent Spotify love and addiction, I tried to build my own recommendation system, as well as explore some fun data analysis along the way.

Here is my project Git folder in case you'd like to check the code. It is divided into several notebooks.

Setting up the Spotify API

The witchery starts with using the Spotify API to create an app within the Spotify developer environment. After creating a developer account and an application environment, I was able to access data on any public playlist out there. I followed the tutorial below.

https://machinelearningknowledge.ai/tutorial-how-to-use-spotipy-api-to-scrape-spotify-data/
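In case it helps, here is a minimal sketch of that setup using the Spotipy client-credentials flow from the tutorial. The credentials are placeholders for the ones from your own developer dashboard, and the playlist ID is just an example public playlist:

import spotipy
from spotipy.oauth2 import SpotifyClientCredentials

# placeholder credentials: use the Client ID and Client Secret
# from your own app in the Spotify developer dashboard
auth_manager = SpotifyClientCredentials(client_id='YOUR_CLIENT_ID',
                                        client_secret='YOUR_CLIENT_SECRET')
sp = spotipy.Spotify(auth_manager=auth_manager)

# sanity check: fetch the first few tracks of a public playlist
tracks = sp.user_playlist_tracks('spotify', '37i9dQZF1DXcBWIGoYBM5M', limit=5)
print([item['track']['name'] for item in tracks['items']])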

My first access to the API involved getting two playlists: liked and disliked songs. (Spoiler alert: these were then used for supervised learning.)

I built a master function to convert any playlist into a dataframe:

import pandas as pd

def master_function(uri):
    # URI format: spotify:user:<username>:playlist:<playlist_id>
    username = uri.split(':')[2]
    playlist_id = uri.split(':')[4]

    # the API returns at most 100 tracks per call, so page through the playlist
    results = {'items': []}
    for n in range(0, 3000, 100):
        new = sp.user_playlist_tracks(username, playlist_id, offset=n)
        results['items'] += new['items']

    playlist_tracks_id = []
    playlist_tracks_titles = []
    playlist_tracks_artists = []
    for track in results['items']:
        playlist_tracks_id.append(track['track']['id'])
        playlist_tracks_titles.append(track['track']['name'])
        # keep only the first (main) artist of each song
        playlist_tracks_artists.append(track['track']['artists'][0]['name'])

    # fetch the audio features for each track and stack them into one dataframe
    df = pd.DataFrame([])
    for track_id in playlist_tracks_id:
        features = sp.audio_features(track_id)
        df = df.append(pd.DataFrame(features))

    df['title'] = playlist_tracks_titles
    df['main_artist'] = playlist_tracks_artists
    df = df[['id', 'title', 'main_artist',
             'danceability', 'energy', 'key', 'loudness',
             'mode', 'acousticness', 'instrumentalness',
             'liveness', 'valence', 'tempo',
             'duration_ms', 'time_signature']]
    return df
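For example (the URI below is a made-up placeholder; you get the real one from Share > Copy Spotify URI):

liked_df = master_function('spotify:user:myusername:playlist:4xP1ZqLGdUZhkMfLy3ecPp')
liked_df.head()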

Exploratory Data Analysis

Luckily for us, Spotify provides us with a way to do that: the Audio Feature Object, aka Features!

Let's take a look at each feature.

TEMPO. Simply put, how many beats per minute (BPM) does each song have?

Tempos are also related to different genres: Hip Hop 85–95 BPM, Techno 120–125 BPM, House & Pop 115–130 BPM, Electro 128 BPM, Reggaeton >130 BPM, Dubstep 140 BPM.
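As a rough sketch of how the distribution plots below are drawn (assuming both playlists are stacked into one dataframe data with a binary Like column, as in the supervised learning section later on):

import seaborn as sns
import matplotlib.pyplot as plt

# compare the tempo distributions of disliked (0) vs liked (1) songs
sns.kdeplot(data[data['Like'] == 0]['tempo'], label='0 - Disliked')
sns.kdeplot(data[data['Like'] == 1]['tempo'], label='1 - Liked')
plt.xlabel('Tempo (BPM)')
plt.legend()
plt.show()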

[Figure: 0 = Disliked; 1 = Liked Songs]

ENERGY. A measure of intensity and activity.

  • This is the first of Spotify's more subjective metrics.
  • Energy represents a perceptual measure of intensity and activity.
  • Typically, energetic tracks feel fast, loud, and noisy. For example, death metal has high energy, while a Bach prelude scores low on the scale.

What artists are driving my Energy taste down?

[Figure: Thanks, Nick fka Chet Faker]

DANCEABILITY.

Danceability describes how suitable a track is for dancing based on a combination of musical elements including tempo, rhythm stability, beat strength, and overall regularity.


Looks like while my 'disliked' songs follow a distribution skewed towards higher levels of danceability, my liked songs follow a roughly normal distribution on this feature, showing that I enjoy a wide range of danceability levels.

[Figure: checking what artists are driving the lower levels of danceability]

[Figure: we found the guilty one]

ENERGY vs DANCEABILITY

[Figure: energy vs danceability]

From the graph, we can see that I do enjoy songs with a normal level of danceability but low energy; the image shows two very distinct clusters.

What is driving high danceability but low energy?

[Figure: LO-FI music! Makes sense, I love this type of music]

KEY.

The estimated overall key of the track. Integers map to pitches using standard Pitch Class notation. E.g. 0 = C, 1 = C♯/D♭, 2 = D, and so on. If no key was detected, the value is -1.
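The mapping itself is just a dictionary lookup; this is my assumed version of that step:

# standard pitch-class notation; -1 means no key was detected
key_names = {-1: 'None', 0: 'C', 1: 'C#/Db', 2: 'D', 3: 'D#/Eb',
             4: 'E', 5: 'F', 6: 'F#/Gb', 7: 'G', 8: 'G#/Ab',
             9: 'A', 10: 'A#/Bb', 11: 'B'}
data['key_name'] = data['key'].map(key_names)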

[Figure: I mapped the keys to have the real notation]

[Figure: key A is used more often, both on liked and disliked songs]

ACOUSTICNESS.

A confidence measure from 0.0 to 1.0 of whether the track is acoustic. 1.0 represents high confidence the track is acoustic.

[Figure: very skewed towards more acoustic songs]

VALENCE.

This is one of the most interesting metrics that Spotify produces: a measure describing the musical positiveness conveyed by a track.

  • Tracks with high valence sound more positive (e.g. happy, cheerful, euphoric).
  • Tracks with low valence sound more negative (e.g. sad, depressed, angry).
[Figure: Am I a sad person? No, it's just LDR again]

LOUDNESS. The overall loudness of a track in decibels (dB).

Loudness values are averaged across the entire track and are useful for comparing the relative loudness of tracks. Loudness is the quality of a sound that is the primary psychological correlate of physical strength (amplitude). Values typically range between -60 and 0 dB.

Fyi, did you know Spotify adjusts for loudness?

Supervised Learning

You can find the full code here.

The first step is to split our data into a training and testing set. I used the sklearn function train_test_split(), which splits the data according to the test_size fraction specified in the call. The code below breaks up the data into 90% train, 10% test.

from sklearn.model_selection import train_test_split

features = ['danceability', 'energy', 'key', 'loudness', 'mode',
            'acousticness', 'instrumentalness', 'liveness',
            'valence', 'tempo', 'duration_ms', 'time_signature']

X = data[features]
y = data['Like']
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, test_size=0.1)

Logistic Regression

The first model that I tried was logistic regression. I got 80% accuracy.

from sklearn.linear_model import LogisticRegression
from sklearn import metrics

lr_model = LogisticRegression()
lr_model.fit(X_train, y_train)

lr_pred = lr_model.predict(X_test)
score = metrics.accuracy_score(y_test, lr_pred) * 100
print("Accuracy using Logistic Regression: ", round(score, 3), "%")

Decision Tree Classifier

A decision tree classifier is often the easiest to visualize. It is pretty much a decision tree based on the features, so you can trace the path down and visually see how it makes decisions.

from sklearn.tree import DecisionTreeClassifier, export_graphviz
import graphviz

model = DecisionTreeClassifier(max_depth=8)
model.fit(X_train, y_train)
score = model.score(X_test, y_test) * 100
print("Accuracy using Decision Tree: ", round(score, 2), "%")

# visualize the tree (class 0 = disliked, class 1 = liked)
export_graphviz(model, out_file="tree.dot", class_names=["disliked", "liked"],
                feature_names=data[features].columns, impurity=False, filled=True)
with open("tree.dot") as f:
    dot_graph = f.read()
graphviz.Source(dot_graph)
[Figure: tree visualization]
import numpy as np
import matplotlib.pyplot as plt

# feature importance
def plot_feature_importances(model):
    n_features = data[features].shape[1]
    plt.barh(range(n_features), model.feature_importances_, align='center')
    plt.yticks(np.arange(n_features), features)
    plt.xlabel("Feature importance")
    plt.ylabel("Feature")
    plt.title("Spotify Features Importance - Decision Tree")

plot_feature_importances(model)

K-Nearest Neighbors (KNN)

The K-Nearest Neighbors classifier looks at the neighbours of a data point in order to determine what the output is. This approach gave a slightly better accuracy than the previous classifiers.

from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score

knn = KNeighborsClassifier(n_neighbors=2)
knn.fit(X_train, y_train)
knn_pred = knn.predict(X_test)
score_test_knn = accuracy_score(y_test, knn_pred) * 100
print("Accuracy using KNN: ", round(score_test_knn, 3), "%")

Random Forest

The random forest is a model made up of many decision trees. Rather than just simply averaging the prediction of trees (which we could call a "forest"), this model uses sampling and random subsets to build trees and split nodes, respectively. Accuracy of 87%!

from sklearn.ensemble import RandomForestClassifier

clf = RandomForestClassifier(n_estimators=100)
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
score = metrics.accuracy_score(y_test, y_pred) * 100
print("Accuracy with Random Forest:", round(score, 4), "%")

Check Results

After checking within my test datasets which songs had been incorrectly classified, I decided to compare against external playlists.

To do so, I used the best classifier (Random Forest) with predict_proba instead of predict. This gives a probability as the output instead of a simple binary label.

# probability that each song on the external playlist would be a 'liked' song
pred = clf.predict_proba(felipe_df[features])[:, 1]
felipe_df['prediction'] = pred
print("How similar is it to my taste?", round(felipe_df['prediction'].mean() * 100, 3), "%")

Unsupervised Learning

Recall that in supervised machine learning you have input variables (X) and an output variable (Y), and you use an algorithm to learn the mapping function from the input to the output. In contrast, in unsupervised machine learning, you only have input data (X) and no corresponding output variables.

The goal here is to extract knowledge based on any patterns we'll be able to find; no liked/disliked labels will be used. Unsupervised learning problems can be further grouped into clustering and association problems.

PCA: Principal Component Analysis

Principal Component Analysis (PCA) emphasizes variation and brings out strong patterns in a dataset. In other words, it takes all the variables and represents them in a smaller space while keeping the nature of the original data as much as possible.

  • The first principal component will encompass as much of the dataset variation as possible in 1 dimension,
  • The second component will encompass as much of the remaining variation as possible while remaining orthogonal to the first, and so on.
from sklearn.decomposition import PCA

# features_scaled: the audio features after MinMax scaling
pca = PCA(n_components=3, random_state=42)
df_pca = pd.DataFrame(data=pca.fit_transform(features_scaled), columns=['PC1', 'PC2', 'PC3'])

# heat map of how much each audio feature loads on each component
plt.matshow(pca.components_, cmap='viridis')
plt.yticks([0, 1, 2], ["First component", "Second component", "Third component"])
plt.colorbar()
plt.xticks(range(len(features)), features, rotation=60, ha='left')
plt.xlabel("Feature")
plt.ylabel("Principal components")
[Figure: PCA with 3 components]

Plotting in 3D: I chose danceability for colour differentiation since, as per above, we can conclude that it is an important feature.

import plotly.express as px

# plot the PCA projection; df_pca_fix is assumed to be df_pca
# with the title/artist columns added back for the hover labels
px.scatter_3d(df_pca_fix,
              x='PC1',
              y='PC2',
              z='PC3',
              title='Principal Component Analysis Projection (3-D)',
              color='danceability',
              size=np.ones(len(df_pca_fix)),
              size_max=5,
              height=600,
              hover_name='title',
              hover_data=['main_artist'],
              color_continuous_scale=px.colors.cyclical.mygbm[:-6])

We can see each song's position and its distance to other songs based on the transformed audio features. Most points are concentrated in the greenish areas. The mapping also confirms that danceability does correlate with PC2 to some extent. 'Am I boy? Am I a girl? Do I really care' (fyi, it's a liked song) sits on the opposite side from the far less danceable 'Halloween' by Helloween.

Clustering with K-Means

The main idea behind k-means clustering is that we choose how many clusters we would like to create (typically we call that number k). We choose this based on domain knowledge (maybe we have some market research on the number of different types of groups we expect to see in our customers?), based on a 'best-guess', or randomly.

In the end, you are left with areas that identify in which cluster a newly assigned point would be classified.

from sklearn.cluster import KMeans

# Let's start with 2 clusters
kmeans = KMeans(n_clusters=2)
model = kmeans.fit(features_scaled)

data_2 = data.copy()
data_2['labels'] = model.labels_
data_2['labels'].value_counts()  # check how many observations fall in each cluster

[Figure: plotting the clusters]

What's the optimal number of clusters? Having too many clusters might mean that we haven't actually learned much about the data; the whole point of clustering is to identify a relatively small number of similarities that exist in the dataset. Too few clusters might mean that we are artificially grouping unlike samples together.

There are many different methods for choosing the appropriate number of clusters, but one common method is calculating a metric for each number of clusters, then plotting the error function vs the number of clusters.

  • Yellowbrick's KElbowVisualizer implements the "elbow" method of selecting the optimal number of clusters by fitting the K-Means model with a range of values for K.

from yellowbrick.cluster import KElbowVisualizer

X = features_scaled

# Instantiate the clustering model and visualizer
model = KMeans()
visualizer = KElbowVisualizer(model, k=(1, 10))

visualizer.fit(X)    # Fit the data to the visualizer
visualizer.show()    # Finalize and render the figure

[Figure: the model is fitted with 3 clusters, the "elbow" in the graph]

I used two other methodologies for unsupervised learning: Gaussian Mixture Models and HAC (Hierarchical Agglomerative Clustering). Please refer to the notebook.
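As a rough sketch of what those two approaches look like on the same scaled features (my assumption here, not the notebook's exact code):

from sklearn.mixture import GaussianMixture
from scipy.cluster.hierarchy import linkage, fcluster

# Gaussian Mixture Model: soft, probabilistic cluster assignments
gmm = GaussianMixture(n_components=3, random_state=42)
gmm_labels = gmm.fit_predict(features_scaled)

# HAC: merge points bottom-up with Ward linkage, then cut the tree into 3 clusters
hac_labels = fcluster(linkage(features_scaled, method='ward'),
                      t=3, criterion='maxclust')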

FINAL PRODUCT

Throughout this project, I have investigated the Spotify API, done Exploratory Data Analysis on disliked vs liked songs, as well as with my Musical Journey, and even dug into several supervised and unsupervised ML techniques.

Finally, I produced a final product where the user can input a (public) playlist URI and get an Exploratory Data Analysis of the songs' features, as well as new song recommendations. The recommendations are produced with unsupervised learning techniques, since we can assume the shared playlist is a collection of liked songs only, and therefore we can't apply labelling techniques.

I've put together a super big function that can be found here. Let's investigate the most important parts.

1- Use the master function that takes the playlist URI and transforms it into a dataframe.

2- Create a super big playlist with musical diversity that will work as a library for music recommendations.
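A minimal sketch of how such a library could be stacked together, reusing the master function from earlier (the URIs are placeholders for whatever diverse public playlists you pick):

library_uris = ['spotify:user:spotify:playlist:PLACEHOLDER_1',
                'spotify:user:spotify:playlist:PLACEHOLDER_2']
all_songs = pd.concat([master_function(uri) for uri in library_uris],
                      ignore_index=True).drop_duplicates(subset='id')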

3- Perform PCA technique with 3 components on the two playlists.
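Roughly, both playlists have to live in the same component space, so the scaler and the PCA are fitted on the library and then applied to the user's playlist. This is my assumed sketch of the step (user_df is a hypothetical name for the user's playlist dataframe):

from sklearn.preprocessing import MinMaxScaler
from sklearn.decomposition import PCA

scaler = MinMaxScaler().fit(all_songs[features])
pca = PCA(n_components=3, random_state=42).fit(scaler.transform(all_songs[features]))

# project the library and the user's playlist into the same 3-D space
df_pca_all_songs = pd.DataFrame(pca.transform(scaler.transform(all_songs[features])),
                                columns=['PC1', 'PC2', 'PC3'])
df_pca = pd.DataFrame(pca.transform(scaler.transform(user_df[features])),
                      columns=['PC1', 'PC2', 'PC3'])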

[Figure: output example for a friend's playlist]

4- Get recommendations with PCA and Nearest Neighbors. Export to PDF.

from scipy.spatial import KDTree

columns = ['PC1', 'PC2', 'PC3']
kdB = KDTree(df_pca_all_songs[columns].values)

# for each song in the input playlist, find its nearest neighbour in the library
neighbours = kdB.query(df_pca[columns].values, k=1)[-1]

# recommendations output: 30 songs you might like
recomendations = all_songs[all_songs.index.isin(neighbours[:31])]
recomendations_output = recomendations[['title', 'main_artist']]
recomendations_output.columns = ['Song Title', 'Artist']

[Figure: example of recommendations (30 songs)]

5- EDA with the Obamas' playlist, Pitchfork top albums and songs, and Billboard Top 100.

from sklearn.preprocessing import MinMaxScaler

# scale the features to [0, 1] so the playlists are comparable on the radar chart
data_scaled = pd.DataFrame(MinMaxScaler().fit_transform(data[features]),
                           columns=data[features].columns)
data_scaled['Playlist'] = data['Playlist']

df_radar = data_scaled.groupby('Playlist').mean().reset_index() \
    .melt(id_vars='Playlist', var_name="features", value_name="avg") \
    .sort_values(by=['Playlist', 'features']).reset_index(drop=True)

fig = px.line_polar(df_radar,
                    r="avg",
                    theta="features",
                    title='Mean Values of Each Playlist Features',
                    color="Playlist",
                    line_close=True,
                    line_shape='spline',
                    range_r=[0, 0.9],
                    color_discrete_sequence=px.colors.cyclical.mygbm[:-6])
fig.show()

[Figure: polar graph]

Hipster or Mainstream? Compare your taste to Billboard and Pitchfork.

[Figure: the High Fidelity TV show]
def big_graph(feature, label1="", label2="", label3=""):
    sns.kdeplot(data[data['Playlist'] == 'Your Songs'][feature], label=label1)
    sns.kdeplot(data[data['Playlist'] == 'Pitchfork'][feature], label=label2)
    sns.kdeplot(data[data['Playlist'] == 'Billboard Top 100'][feature], label=label3)
    plt.title(feature)
    plt.grid(b=None)

plots = []
plt.figure(figsize=(16, 16))
plt.suptitle("Hipster or Mainstream?", fontsize="x-large")
for i, f in enumerate(features):
    plt.subplot(4, 4, i + 1)
    # only label the last plot of each row to avoid repeated legends
    if ((i + 1) % 4 == 0) or (i + 1 == len(features)):
        big_graph(f, label1="Your Songs", label2="Pitchfork", label3="Billboard Top 100")
    else:
        big_graph(f)
    plots.append(plt.gca())

# the code is the same for the Obamas
[Figure: You vs Pitchfork vs Billboard Top 100]

You an Obama? Compare your taste with the Obamas.

[Figure: You vs The Obamas]

6- Put together a Final Report PDF. Et voilà!
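One way to do that last step, assuming the figures generated above are collected in a list (report_figures is a hypothetical name), is matplotlib's PdfPages:

from matplotlib.backends.backend_pdf import PdfPages

with PdfPages('spotify_final_report.pdf') as pdf:
    for fig in report_figures:  # hypothetical list of the matplotlib figures above
        pdf.savefig(fig)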

Linkedin: https://www.linkedin.com/in/ritasousapereira/

If you liked the article, make sure to check the full GitHub repo with all the code that you can grab and explore! Also, if you have any suggestions regarding the project workflow, or if you notice I did something wrong, please be sure to let me know in the comments.

Original article: https://medium.com/@ritasousabritopereira4/my-spotify-journey-towards-a-recommendation-system-2fac1701dda4
