Machine Learning Library for C++ (No Dependencies)

Introduction

Although Python has the amazing scikit-learn library, porting it to native C++ is difficult, and existing machine learning libraries in C++ have too many dependencies. So I have tried my best to implement some of the most used algorithms in C++. This library is currently under development, and I will add many more algorithms in the future.

My GitHub repository: https://github.com/VISWESWARAN1998/sklearn

Label Encoding

Label encoding is the process of encoding categorical data into numerical data. For example, if a column in the dataset contains country values like GERMANY, FRANCE, and ITALY, the label encoder will convert this categorical data into numerical data like this:

Country (Categorical) | Country (Numerical)
----------------------|--------------------
GERMANY               | 1
FRANCE                | 0
ITALY                 | 2

Here is an example program using our library:

// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include <string>
#include <vector>
#include "preprocessing.h"

int main()
{
    std::vector<std::string> categorical_data = { "GERMANY", "FRANCE", "ITALY" };
    LabelEncoder<std::string> encoder(categorical_data);
    // fit_transorm (named as in the library) returns the numerical codes
    std::vector<unsigned long int> numerical_data = encoder.fit_transorm();
    for (std::size_t i = 0; i < categorical_data.size(); i++)
    {
        std::cout << categorical_data[i] << " - " << numerical_data[i] << "\n";
    }
}
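
Per the mapping in the table above, this prints each value alongside its code, i.e. GERMANY - 1, FRANCE - 0 and ITALY - 2, one pair per line.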

LabelBinarizer

LabelBinarizer is best suited to categorical variables like IP addresses, because while predicting you may encounter a value that was not present during training. LabelEncoder fails in this case: it has never seen the value before, so it cannot convert it into numerical data. But LabelBinarizer works like a one-hot encoder, and it encodes a value to all zeros if it sees something new while predicting. Below is an example:

// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include <string>
#include <vector>
#include "preprocessing.h"

int main()
{
    std::vector<std::string> ip_addresses = { "A", "B", "A", "B", "C" };
    LabelBinarizer<std::string> binarize(ip_addresses);
    std::vector<std::vector<unsigned long int>> result = binarize.fit();
    for (std::vector<unsigned long int> i : result)
    {
        for (unsigned long int j : i) std::cout << j << " ";
        std::cout << "\n";
    }
    // Predict
    std::cout << "Prediction:\n-------------\n";
    std::string test = "D";
    std::vector<unsigned long int> prediction = binarize.predict(test);
    for (unsigned long int i : prediction) std::cout << i << " ";
}

In the above code, we have a feature column like this:

A
B
A
B
C

But while predicting, we encounter something entirely new, say "D" in the example above. This is what LabelBinarizer produces:

Output
1 0 0
0 1 0
1 0 0
0 1 0
0 0 1
Prediction:
-------------
0 0 0

The binarizer finds the unique values seen during training, i.e., A, B, C, and marks each row with 1 in the position where its value is present.

Standardization

StandardScaler standardizes features by removing the mean and scaling to unit variance. Python's scikit-learn offers this as "StandardScaler"; see its documentation for details: https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.StandardScaler.html

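Concretely, standardization transforms each value x as z = (x - μ) / σ, where μ is the mean and σ the standard deviation of the feature.
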
Our library offers two methods:

  1. scale: removes the mean and scales to unit variance
  2. inverse_scale: does the opposite, restoring the original value

// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include <vector>
#include "preprocessing.h"

int main()
{
    StandardScaler scaler({0, 0, 1, 1});
    std::vector<double> scaled = scaler.scale();
    // Scaled value and inverse scaling
    for (double i : scaled)
    {
        std::cout << i << " " << scaler.inverse_scale(i) << "\n";
    }
}
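
For the input {0, 0, 1, 1} above, the mean is 0.5 and the population standard deviation is 0.5, so, assuming the scaler uses the population standard deviation (as scikit-learn's StandardScaler does), the scaled values are {-1, -1, 1, 1}, and inverse_scale maps each one back to its original value.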

Normalization

Normalization rescales feature values to a common range, which helps speed up training.

// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include <vector>
#include "preprocessing.h"

int main()
{
    std::vector<double> normalized_vec = 
         preprocessing::normalize({ 800, 10, 12, 78, 56, 49, 7, 1200, 1500 });
    for (double i : normalized_vec) std::cout << i << " ";
}

Simple Linear Regression

Simple linear regression has only one independent variable X and one dependent variable y. We will use the https://www.kaggle.com/venjktry/simple-linear-regression/data dataset from Kaggle.

Here is example code trained on the above data:

// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include <string>
#include <fstream>
#include <vector>
#include "lsr.h"

int main()
{
    simple_linear_regression slr({ 24.0, 50.0, 15.0, 38.0, 87.0, 36.0, 
           12.0, 81.0, 25.0, 5.0, 16.0, 16.0, 24.0, 39.0, 54.0, 60.0, 
           26.0, 73.0, 29.0, 31.0, 68.0, 87.0, 58.0, 54.0, 84.0, 58.0, 
           49.0, 20.0, 90.0, 48.0, 4.0, 25.0, 42.0, 0.0, 60.0, 93.0, 39.0, 
           7.0, 21.0, 68.0, 84.0, 0.0, 58.0, 19.0, 36.0, 19.0, 59.0, 51.0, 
           19.0, 33.0, 85.0, 44.0, 5.0, 59.0, 14.0, 9.0, 75.0, 69.0, 10.0, 
           17.0, 58.0, 74.0, 21.0, 51.0, 19.0, 50.0, 24.0, 0.0, 12.0, 75.0, 
           21.0, 64.0, 5.0, 58.0, 32.0, 41.0, 7.0, 4.0, 5.0, 49.0, 90.0, 3.0, 
           11.0, 32.0, 83.0, 25.0, 83.0, 26.0, 76.0, 95.0, 53.0, 77.0, 42.0, 
           25.0, 54.0, 55.0, 0.0, 73.0, 35.0, 86.0, 90.0, 13.0, 46.0, 46.0, 
           32.0, 8.0, 71.0, 28.0, 24.0, 56.0, 49.0, 79.0, 90.0, 89.0, 41.0, 
           27.0, 58.0, 26.0, 31.0, 70.0, 71.0, 39.0, 7.0, 48.0, 56.0, 45.0, 
           41.0, 3.0, 37.0, 24.0, 68.0, 47.0, 27.0, 68.0, 74.0, 95.0, 79.0, 
           21.0, 95.0, 54.0, 56.0, 80.0, 26.0, 25.0, 8.0, 95.0, 94.0, 54.0, 
           7.0, 99.0, 36.0, 48.0, 65.0, 42.0, 93.0, 86.0, 26.0, 51.0, 100.0, 
           94.0, 6.0, 24.0, 75.0, 7.0, 53.0, 73.0, 16.0, 80.0, 77.0, 89.0, 80.0, 
           55.0, 19.0, 56.0, 47.0, 56.0, 2.0, 82.0, 57.0, 44.0, 26.0, 52.0, 41.0, 
           44.0, 3.0, 31.0, 97.0, 21.0, 17.0, 7.0, 61.0, 10.0, 52.0, 10.0, 65.0, 
           71.0, 4.0, 24.0, 26.0, 51.0 }, { 21.54945196, 47.46446305, 17.21865634, 
           36.58639803, 87.28898389, 32.46387493, 10.78089683, 80.7633986, 24.61215147, 
           6.963319071, 11.23757338, 13.53290206, 24.60323899, 39.40049976, 48.43753838, 
           61.69900319, 26.92832418, 70.4052055, 29.34092408, 25.30895192, 69.02934339, 
           84.99484703, 57.04310305, 50.5921991, 83.02772202, 57.05752706, 47.95883341, 
           24.34226432, 94.68488281, 48.03970696, 7.08132338, 21.99239907, 42.33151664, 
           0.329089443, 61.92303698, 91.17716423, 39.45358014, 5.996069607, 22.59015942, 
           61.18044414, 85.02778957, -1.28631089, 61.94273962, 21.96033347, 33.66194193, 
           17.60946242, 58.5630564, 52.82390762, 22.1363481, 35.07467353, 86.18822311, 
           42.63227697, 4.09817744, 61.2229864, 17.70677576, 11.85312574, 80.23051695, 
           62.64931741, 9.616859804, 20.02797699, 61.7510743, 71.61010303, 23.77154623, 
           51.90142035, 22.66073682, 50.02897927, 26.68794368, 0.376911899, 6.806419002, 
           77.33986001, 28.90260209, 66.7346608, 0.707510638, 57.07748383, 28.41453196, 
           44.46272123, 7.459605998, 2.316708112, 4.928546187, 52.50336074, 91.19109623, 
           8.489164326, 6.963371967, 31.97989959, 81.4281205, 22.62365422, 78.52505087, 
           25.80714057, 73.51081775, 91.775467, 49.21863516, 80.50445387, 50.05636123, 
           25.46292549, 55.32164264, 59.1244888, 1.100686692, 71.98020786, 30.13666408, 
           83.88427405, 89.91004752, 8.335654576, 47.88388961, 45.00397413, 31.15664574, 
           9.190375682, 74.83135003, 30.23177607, 24.21914027, 57.87219151, 50.61728392, 
           78.67470043, 86.236707, 89.10409255, 43.26595082, 26.68273277, 59.46383041, 
           28.90055826, 31.300416, 71.1433266, 68.4739206, 39.98238856, 4.075776144, 
           47.85817542, 51.20390217, 43.9367213, 38.13626679, 3.574661632, 36.4139958, 
           22.21908523, 63.5312572, 49.86702787, 21.53140009, 64.05710234, 70.77549842, 
           92.15749762, 81.22259156, 25.10114067, 94.08853397, 53.25166165, 59.16236621, 
           75.24148428, 28.22325833, 25.33323728, 6.364615703, 95.4609216, 88.64183756, 
           58.70318693, 6.815491279, 99.40394676, 32.77049249, 47.0586788, 60.53321778, 
           40.30929858, 89.42222685, 86.82132066, 26.11697543, 53.26657596, 96.62327888, 
           95.78441027, 6.047286687, 24.47387908, 75.96844763, 3.829381009, 52.51703683, 
           72.80457527, 14.10999096, 80.86087062, 77.01988215, 86.26972444, 77.13735466, 
           51.47649476, 17.34557531, 57.72853572, 44.15029394, 59.24362743, -1.053275611, 
           86.79002254, 60.14031858, 44.04222058, 24.5227488, 52.95305521, 43.16133498, 
           45.67562576, -2.830749501, 29.19693178, 96.49812401, 22.5453232, 20.10741433, 
           4.035430253, 61.14568518, 13.97163653, 55.34529893, 12.18441166, 64.00077658, 
           70.3188322, -0.936895047, 18.91422276, 23.87590331, 47.5775361 }, DEBUG);
    slr.fit();
    std::vector<double> test = { 45.0, 91.0, 61.0, 10.0, 47.0, 33.0, 84.0, 24.0, 48.0, 
    48.0, 9.0, 93.0, 99.0, 8.0, 20.0, 38.0, 78.0, 81.0, 42.0, 95.0, 78.0, 44.0, 68.0, 87.0, 
    58.0, 52.0, 26.0, 75.0, 48.0, 71.0, 77.0, 34.0, 24.0, 70.0, 29.0, 76.0, 98.0, 28.0, 87.0, 
    9.0, 87.0, 33.0, 64.0, 17.0, 49.0, 95.0, 75.0, 89.0, 81.0, 25.0, 47.0, 50.0, 5.0, 68.0, 
    84.0, 8.0, 41.0, 26.0, 89.0, 78.0, 34.0, 92.0, 27.0, 12.0, 2.0, 22.0, 0.0, 26.0, 50.0, 
    84.0, 70.0, 66.0, 42.0, 19.0, 94.0, 71.0, 19.0, 16.0, 49.0, 29.0, 29.0, 86.0, 50.0, 
    86.0, 30.0, 23.0, 20.0, 16.0, 57.0, 8.0, 8.0, 62.0, 55.0, 30.0, 86.0, 62.0, 
    51.0, 61.0, 86.0, 61.0, 21.0 };
    // Write each prediction to out.txt, one per line
    std::ofstream file("out.txt", std::ios::app);
    for (double i : test)
    {
        file << slr.predict(i) << "\n";
    }
    file.close();
    slr.save_model("model.sklearn");
    // Wait for input so the console window stays open
    int stay;
    std::cin >> stay;
}

We have visualized our C++ model's prediction:

[Image: plot of the C++ model's predictions]

class: simple_linear_regression

Constructor for Training a New Model
// independent variable X, dependent variable y; DEBUG prints verbose messages, NODEBUG suppresses them
simple_linear_regression(std::vector<double> X, std::vector<double> y, unsigned short verbose);

We then use the fit method to train the model.

Constructor for Loading the Saved Model
simple_linear_regression(std::string model_name);
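
For example, to reload the model saved as model.sklearn earlier and predict with it (a sketch based on the constructor above; no fit() call is needed):

// Sketch: load a previously saved simple linear regression model
#include <iostream>
#include "lsr.h"

int main()
{
    simple_linear_regression slr("model.sklearn");
    std::cout << slr.predict(45.0);
}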

Multiple Linear Regression

Simple linear regression has only one independent variable, whereas multiple linear regression has two or more. Solving it involves matrix algebra, which takes considerable time when you have many independent variables and a bigger dataset.

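For context, multiple linear regression finds its coefficients by solving the normal equations of ordinary least squares (see references [5] and [6]): b = (X^T X)^(-1) X^T y, where X is the matrix of independent variables (with a column of ones for the intercept) and y is the vector of dependent values. Inverting X^T X is the matrix-algebra step that becomes expensive as the number of independent variables grows.
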
Here is an example dataset from stattrek on predicting a test score from IQ and study hours:

[Image: example dataset of IQ, study hours, and test score from stattrek]
Training and Saving the Model
// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include "mlr.h"

int main()
{
    LinearRegression mlr({ {110, 40}, {120, 30}, {100, 20}, {90, 0}, 
                         {80, 10} }, {100, 90, 80, 70, 60}, NODEBUG);
    mlr.fit();
    std::cout << mlr.predict({ 110, 40 });
    mlr.save_model("model.json");
}

Loading the Saved Model
// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include "mlr.h"

int main()
{
    // Don't use fit method here
    LinearRegression mlr("model.json");
    std::cout << mlr.predict({ 110, 40 });
}

Logistic Regression

Please do not get confused by the word "regression" in logistic regression; it is generally used for classification problems. The heart of logistic regression is the sigmoid activation function. An activation function takes any input value and outputs a value within a certain range; in our case (sigmoid), it returns a value between 0 and 1.

In the image, you can see the output y of the sigmoid activation function for -3 <= x <= 3.

[Image: sigmoid function plotted for -3 <= x <= 3]

The idea behind logistic regression is taking the output of linear regression, i.e., y = mx + c, and applying the logistic function 1/(1+e^-y), which outputs a value between 0 and 1. We can clearly see that this is a binary classifier; for example, it can classify binary datasets such as predicting whether a person is male or female from certain parameters.

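As a minimal standalone sketch (not part of the library), the logistic function itself is a one-liner in C++:

// Sketch: the logistic (sigmoid) function
#include <cmath>
#include <iostream>

double sigmoid(double y)
{
    // Maps any real input to the open interval (0, 1)
    return 1.0 / (1.0 + std::exp(-y));
}

int main()
{
    std::cout << sigmoid(0.0) << "\n"; // 0.5
    std::cout << sigmoid(3.0) << "\n"; // ~0.95
}
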
But with some modifications, we can use logistic regression to classify multi-class problems too. Here, we use the one-vs-rest principle: we train one model per class. For example, if the class count is 10, it trains 10 linear regression models, each time relabeling the current class as 1 and all the rest as 0, and predicts a probability for each class. If this is unclear, here is a detailed explanation: https://prakhartechviz.blogspot.com/2019/02/multi-label-classification-python.html

We will take a simple classification problem: classifying whether a person is male or female.

We classify male vs. female using height, weight, and foot size, and save the model. Here is our dataset:

[Image: dataset of height, weight, and foot size labeled male/female]

All we have to do is predict whether the person is male or female using height, weight, and foot size.

// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include <map>
#include "logistic_regression.h"

int main()
{
    logistic_regression lg({ { 6, 180, 12 },{ 5.92, 190, 11 },{ 5.58, 170, 12 },
        { 5.92, 165, 10 },{ 5, 100, 6 },{ 5.5, 150, 8 },{ 5.42, 130, 7 },{ 5.75, 150, 9 } },
        { 0, 0, 0, 0, 1, 1, 1, 1 }, NODEBUG);
    lg.fit();
    // Save the model
    lg.save_model("model.json");
    std::map<unsigned long int, double> probabilities = lg.predict({ 6, 130, 8 });
    double male = probabilities[0];   // class 0 = male (per the training labels)
    double female = probabilities[1]; // class 1 = female
    if (male > female) std::cout << "MALE";
    else std::cout << "FEMALE";
}

and loading a saved model:

// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include <map>
#include "logistic_regression.h"

int main()
{
    logistic_regression lg("model.json");
    std::map<unsigned long int, double> probabilities = lg.predict({ 6, 130, 8 });
    double male = probabilities[0];
    double female = probabilities[1];
    if (male > female) std::cout << "MALE";
    else std::cout << "FEMALE";
}

Gaussian Naive Bayes

As with logistic regression, we classify male vs. female using height, weight, and foot size, and save the model. Here is our dataset:

[Image: the same height, weight, and foot size dataset]
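
For context, Gaussian naive Bayes assumes each feature within a class follows a normal distribution, so the likelihood of a value x given a class is computed from that class's mean μ and variance σ² as (1 / sqrt(2πσ²)) * e^(-(x - μ)² / (2σ²)), and the per-feature likelihoods are multiplied together with the class prior (see references [2] and [4]).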

Training a Model

// SWAMI KARUPPASWAMI THUNNAI

#include "naive_bayes.h"

int main()
{
    gaussian_naive_bayes nb({ {6, 180, 12}, {5.92, 190, 11}, {5.58, 170, 12},
    {5.92, 165, 10}, {5, 100, 6}, {5.5, 150, 8}, {5.42, 130, 7}, {5.75, 150, 9} },
    { 0, 0, 0, 0, 1, 1, 1, 1 }, DEBUG);
    nb.fit();
    nb.save_model("model.json");
    std::map<unsigned long int, double> probabilities = nb.predict({ 6, 130, 8 });
    double male = probabilities[0];
    double female = probabilities[1];
    if (male > female) std::cout << "MALE";
    else std::cout << "FEMALE";
}

Loading a Saved Model

// SWAMI KARUPPASWAMI THUNNAI

#include "naive_bayes.h"

int main()
{
    gaussian_naive_bayes nb(NODEBUG);
    nb.load_model("model.json");
    std::map<unsigned long int, double> probabilities = nb.predict({ 6, 130, 8 });
    double male = probabilities[0];
    double female = probabilities[1];
    if (male > female) std::cout << "MALE";
    else std::cout << "FEMALE";
}

Training Bigger Datasets

Machine learning datasets are usually huge, and even simple algorithms perform well when enough data is provided [8]. We cannot write the dataset as vectors in the source code itself; we need some way to load the dataset dynamically. Here, we convert the dataset into JSON format using your favorite programming language, like this.

You can see the dataset here: https://github.com/VISWESWARAN1998/sklearn/blob/master/datasets/boston_house_prices.json

[Image: structure of the JSON dataset file]

where max_index is the total number of rows present, and every row has X and y, the independent and dependent variables. Once converted, we can use the noob_pandas class which ships with this library to get the independent and dependent variables. Here, I will show you how to train on the famous Boston housing dataset.

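Based on that description, the file layout looks roughly like this (the row keys here are illustrative; see the actual file in the repository's datasets folder for the exact schema). Using two rows of the male/female dataset from earlier as an example:

{
    "max_index": 2,
    "0": { "X": [6.0, 180.0, 12.0], "y": 0 },
    "1": { "X": [5.0, 100.0, 6.0], "y": 1 }
}
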
https://medium.com/@yharsh800/boston-housing-linear-regression-robust-regression-9be52132def4

The labels present in the dataset:

[Image: labels (feature names) in the Boston housing dataset]

and a few rows showing what the data looks like:

[Image: sample rows of the Boston housing dataset]

Training and Predicting Using the Boston Dataset

// SWAMI KARUPPASWAMI THUNNAI

#include <iostream>
#include "mlr.h"
#include "noob_pandas.h"

int main()
{
    // For classification use unsigned long int instead of double
    noob_pandas<double> dataset("boston_house_prices.json");
    LinearRegression mlr(dataset.get_X(), dataset.get_y(), NODEBUG);
    mlr.fit();
    std::cout << mlr.predict({ 0.02729, 0.0, 7.07, 0.0, 0.469, 
                               7.185, 61.1, 4.9671, 2.0, 242.0, 17.8, 392.83, 4.03 });
}

*Note: I will post how I made the JSON dataset in the comments below. I used Python, but I am sure you can use any other language you wish.

References

  1. https://scikit-learn.org/stable/
  2. https://hackernoon.com/implementation-of-gaussian-naive-bayes-in-python-from-scratch-c4ea64e3944d
  3. https://www.mathsisfun.com/data/least-squares-regression.html
  4. https://www.antoniomallia.it/lets-implement-a-gaussian-naive-bayes-classifier-in-python.html
  5. https://www.geeksforgeeks.org/adjoint-inverse-matrix/
  6. https://stattrek.com/multiple-regression/regression-coefficients.aspx
  7. https://en.wikipedia.org/wiki/Sigmoid_function
  8. http://static.googleusercontent.com/media/research.google.com/fr//pubs/archive/35179.pdf

Translated from: https://www.codeproject.com/Articles/5246467/Machine-Learning-Library-for-Cplusplus-No-Dependen
