I. The KNN Algorithm
1. Algorithm Overview
KNN (the K-Nearest Neighbor algorithm), also known as the nearest-neighbor method, is a classification algorithm from machine learning.
2. Basic Idea
The idea is very simple and can be summed up by the proverb "birds of a feather flock together": a test sample belongs to whichever class of training samples its features most resemble. Concretely, find the K training samples closest to the test sample (K chosen as appropriate for the problem); whichever class holds the majority among those K neighbors (or the largest weighted vote) is the predicted class of the test sample.
3. Application Areas
KNN is a lazy learner: it builds no explicit model in advance and is fairly tolerant of noisy feature data. It is commonly used for character recognition, text classification, and image recognition.
4. Algorithm Steps
① Prepare the data and preprocess it.
② Compute the distance from the test sample (the point to be classified) to every training sample.
③ Sort the distances and select the K points with the smallest distances.
④ Compare the classes of those K points and, by majority vote, assign the test sample to the class that occurs most often among them.
5. Euclidean Distance
For two points x = (x1, x2, …, xn) and y = (y1, y2, …, yn) in n-dimensional space, the distance formula is:

d(x, y) = sqrt((x1 − y1)^2 + (x2 − y2)^2 + … + (xn − yn)^2)
There are other metrics as well, such as the Mahalanobis, Manhattan, and Chebyshev distances; they are not covered here, and interested readers can look them up.
6. Why Parallel Computing
KNN is computationally heavy, since every test sample must be compared against every training sample: for MNIST that is 10,000 × 60,000 = 6 × 10^8 distance computations, each over 784 dimensions, on the order of 5 × 10^11 multiply-adds per run. Moreover, obtaining a more accurate classifier generally requires enlarging the per-class sample size, which increases the cost further. This makes the algorithm a natural candidate for parallelization.
II. The MNIST Dataset
1. Overview
This project uses the MNIST handwritten-digit dataset. Each sample is a 28 × 28 pixel grayscale image of a handwritten digit, and the full set consists of 60,000 training samples and 10,000 test samples. Each image has 784 (28 × 28) pixels, white strokes on a black background: in the raw IDX files every pixel is an unsigned byte from 0 (black) to 255 (white), and loaders such as TensorFlow's commonly rescale these to floats in [0, 1], where values closer to 1 are whiter.
Each image's label is provided as a length-10 one-hot vector (when loaded with one_hot=True): the index of the single 1 is the digit.
2. How to Download
The input_data module
TensorFlow's official input_data module can download and load the dataset automatically via its read_data_sets() function.
# The first run downloads the dataset automatically into the given directory
from tensorflow.examples.tutorials.mnist import input_data
# './.' is the directory to save the data into
mnist = input_data.read_data_sets('./.', one_hot=True)
Manual download
Official site (may be inaccessible without a proxy):
http://yann.lecun.com/exdb/mnist/
CSDN mirror:
https://download.csdn.net/download/giantroit/16787569
| File | Contents |
|---|---|
| train-labels-idx1-ubyte.gz | Labels for the training images |
| train-images-idx3-ubyte.gz | 60,000 training images (TensorFlow splits them into 55,000 for training and 5,000 for validation) |
| t10k-labels-idx1-ubyte.gz | Labels for the test images |
| t10k-images-idx3-ubyte.gz | 10,000 test images |
III. C Implementation
Structs for dataset metadata
// Metadata for a dataset file: how many images it holds and the size of each
typedef struct {
    int total;       // number of images
    unsigned length; // pixels per image
} ex_data;
// Pairs a training label with its distance to the test sample, so the
// distances can be sorted and the most frequent label among the K nearest taken.
typedef struct {
    unsigned label;
    int distance;
} label_distance;
Reading the dataset (image files and label files)
// Byte-order conversion (big-endian <-> little-endian).
// The IDX files store their header fields in big-endian (network) byte order,
// while most hosts are little-endian, so each field must be byte-swapped after reading.
unsigned swap32(unsigned x) {
    return ((x >> 0) & 0xff) << 24 | ((x >> 8) & 0xff) << 16 |
           ((x >> 16) & 0xff) << 8 | ((x >> 24) & 0xff) << 0;
}
// Read image data; path is the file path. On return *images points to the
// pixel data, stored contiguously as a one-dimensional array.
ex_data read_image(const char *path, unsigned char **images){
    // Open the file as binary, read-only; it must already exist ("rb")
    FILE *fp = fopen(path, "rb");
    // magic_number identifies the file format (2051 for IDX image files)
    // total is the number of images (records) in the file
    // row, col are the pixel dimensions of each image
    unsigned magic_number, total, row, col;
    // Read each header field from the file
    fread(&magic_number, sizeof(unsigned), 1, fp);
    fread(&total, sizeof(unsigned), 1, fp);
    fread(&row, sizeof(unsigned), 1, fp);
    fread(&col, sizeof(unsigned), 1, fp);
    // Convert byte order
    magic_number = swap32(magic_number);
    total = swap32(total);
    row = swap32(row);
    col = swap32(col);
    // Fill in the metadata
    ex_data Data;
    Data.total = total;
    Data.length = row * col;
    // Allocate memory for the pixel data and read it in
    (*images) = (unsigned char *)malloc(total * Data.length * sizeof(unsigned char));
    fread(*images, sizeof(unsigned char), total * Data.length, fp);
    fclose(fp);
    return Data;
}
// Read label data; path is the file path. On return *labels points to the
// labels, stored contiguously as a one-dimensional array.
void read_label(const char *path, unsigned char **labels){
    FILE *fp = fopen(path, "rb");
    unsigned magic_number, total;
    fread(&magic_number, sizeof(unsigned), 1, fp);
    fread(&total, sizeof(unsigned), 1, fp);
    magic_number = swap32(magic_number);
    total = swap32(total);
    // Allocate memory for the labels and read them in
    *labels = (unsigned char *)malloc(total * sizeof(unsigned char));
    fread(*labels, sizeof(unsigned char), total, fp);
    fclose(fp);
}
For reference, fread's signature is size_t fread(void *ptr, size_t size, size_t nmemb, FILE *stream); it reads nmemb items of size bytes each into ptr and returns the number of items actually read.
Euclidean distance computation
// vect1, vect2: the two vectors; dimension: their length.
// Returns the *squared* Euclidean distance: since sqrt() is monotonic,
// comparing squared distances yields the same nearest-neighbor ordering
// while avoiding the cost of the square root.
int distance(unsigned char *vect1, unsigned char *vect2, int dimension) {
    int ret = 0;
    for (int i = 0; i < dimension; i++) {
        int t = vect1[i] - vect2[i];
        ret += t * t;
    }
    return ret;
}
Comparison function
// Comparator for qsort: ascending by distance.
// (x > y) - (x < y) yields -1/0/1 without the overflow risk of subtraction.
int compare(const void *a, const void *b) {
    int x = ((const label_distance *)a)->distance;
    int y = ((const label_distance *)b)->distance;
    return (x > y) - (x < y);
}
The complete program
#include <stdio.h>
#include <stdlib.h>
#define K 5
// Metadata for a dataset file: how many images it holds and the size of each
typedef struct {
    int total;       // number of images
    unsigned length; // pixels per image
} ex_data;
// Pairs a training label with its distance to the test sample, so the
// distances can be sorted and the most frequent label among the K nearest taken.
typedef struct {
    unsigned label;
    int distance;    // int, matching what distance() returns
} label_distance;
// Byte-order conversion (big-endian <-> little-endian).
// The IDX files store their header fields in big-endian (network) byte order,
// while most hosts are little-endian, so each field must be byte-swapped after reading.
unsigned swap32(unsigned x) {
    return ((x >> 0) & 0xff) << 24 | ((x >> 8) & 0xff) << 16 |
           ((x >> 16) & 0xff) << 8 | ((x >> 24) & 0xff) << 0;
}
// Read image data; path is the file path. On return *images points to the
// pixel data, stored contiguously as a one-dimensional array.
ex_data read_image(const char *path, unsigned char **images){
    // Open the file as binary, read-only; it must already exist ("rb")
    FILE *fp = fopen(path, "rb");
    // magic_number identifies the file format (2051 for IDX image files)
    // total is the number of images (records) in the file
    // row, col are the pixel dimensions of each image
    unsigned magic_number, total, row, col;
    // Read each header field from the file
    fread(&magic_number, sizeof(unsigned), 1, fp);
    fread(&total, sizeof(unsigned), 1, fp);
    fread(&row, sizeof(unsigned), 1, fp);
    fread(&col, sizeof(unsigned), 1, fp);
    // Convert byte order
    magic_number = swap32(magic_number);
    total = swap32(total);
    row = swap32(row);
    col = swap32(col);
    // Fill in the metadata
    ex_data Data;
    Data.total = total;
    Data.length = row * col;
    // Allocate memory for the pixel data and read it in
    (*images) = (unsigned char *)malloc(total * Data.length * sizeof(unsigned char));
    fread(*images, sizeof(unsigned char), total * Data.length, fp);
    fclose(fp);
    return Data;
}
// Read label data; path is the file path. On return *labels points to the
// labels, stored contiguously as a one-dimensional array.
void read_label(const char *path, unsigned char **labels){
    FILE *fp = fopen(path, "rb");
    unsigned magic_number, total;
    fread(&magic_number, sizeof(unsigned), 1, fp);
    fread(&total, sizeof(unsigned), 1, fp);
    magic_number = swap32(magic_number);
    total = swap32(total);
    // Allocate memory for the labels and read them in
    *labels = (unsigned char *)malloc(total * sizeof(unsigned char));
    fread(*labels, sizeof(unsigned char), total, fp);
    fclose(fp);
}
// vect1, vect2: the two vectors; dimension: their length.
// Returns the squared Euclidean distance (the square root is unnecessary
// for nearest-neighbor comparisons).
int distance(unsigned char *vect1, unsigned char *vect2, int dimension) {
    int ret = 0;
    for (int i = 0; i < dimension; i++) {
        int t = vect1[i] - vect2[i];
        ret += t * t;
    }
    return ret;
}
// Comparator for qsort: ascending by distance, without subtraction overflow.
int compare(const void *a, const void *b) {
    int x = ((const label_distance *)a)->distance;
    int y = ((const label_distance *)b)->distance;
    return (x > y) - (x < y);
}
int main() {
    unsigned char *train_images, *test_images, *train_labels, *test_labels;
    // Load the dataset with the read functions defined above
    read_label("C:/Users/Administrator/Desktop/dataset/train-labels-idx1-ubyte", &train_labels);
    read_label("C:/Users/Administrator/Desktop/dataset/t10k-labels-idx1-ubyte", &test_labels);
    ex_data ex_train = read_image("C:/Users/Administrator/Desktop/dataset/train-images-idx3-ubyte", &train_images);
    ex_data ex_test = read_image("C:/Users/Administrator/Desktop/dataset/t10k-images-idx3-ubyte", &test_images);
    int len = ex_train.length;
    // Number of correctly classified test samples
    int correct_counter = 0;
    // For the current test sample, records the distance to every training
    // sample together with that training sample's label
    label_distance *train_result = (label_distance *)malloc(ex_train.total * sizeof(label_distance));
    for (int idx_test = 0; idx_test < ex_test.total; idx_test++)
    {
        // For each test sample, compute the distance to every training sample
        for (int idx_train = 0; idx_train < ex_train.total; idx_train++)
        {
            train_result[idx_train].label = train_labels[idx_train];
            train_result[idx_train].distance =
                distance(
                    test_images + (idx_test * len),
                    train_images + (idx_train * len),
                    len);
        }
        // Sort by distance, ascending
        qsort(train_result, ex_train.total, sizeof(label_distance), compare);
        int cnt[10] = {0}; // occurrence count of each digit 0-9 among the K nearest
        int max_cnt = -1, prediction = -1;
        for (int i = 0; i < K; i++)
        {
            if (++cnt[train_result[i].label] > max_cnt)
            {
                max_cnt = cnt[train_result[i].label];
                prediction = train_result[i].label;
            }
        }
        if (prediction == test_labels[idx_test])
        {
            // Correct prediction
            correct_counter++;
        }
    }
    // Print the results
    printf("total correct: %d\n", correct_counter);
    printf("accuracy: %lf\n", (double)correct_counter / (double)ex_test.total);
    // Release memory
    free(train_images);
    free(train_labels);
    free(test_images);
    free(test_labels);
    free(train_result);
    return 0;
}
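Assuming the source is saved as knn.c and the dataset paths in main() point at the unpacked IDX files, the program can be built with any C99 compiler, for example:

```shell
# -O2 matters here: the distance loop dominates the runtime
gcc -O2 -std=c99 -o knn knn.c
./knn
```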
IV. CUDA Implementation
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <cuda.h>
#include <cuda_runtime.h>
// Metadata for a dataset file (image count and per-image size)
typedef struct {
    int total;
    unsigned length;
} ex_data;
// A training label paired with its distance to the test image
typedef struct {
    unsigned label;
    int distance;
} label_distance;
// Byte-order conversion (big-endian <-> little-endian).
// The IDX files store their header fields in big-endian (network) byte order,
// while most hosts are little-endian, so each field must be byte-swapped after reading.
unsigned swap32(unsigned x) {
    return ((x >> 0) & 0xff) << 24 | ((x >> 8) & 0xff) << 16 |
           ((x >> 16) & 0xff) << 8 | ((x >> 24) & 0xff) << 0;
}
// Read image data; path is the file path. On return *images points to the
// pixel data, stored contiguously as a one-dimensional array.
ex_data read_image(const char *path, unsigned char **images){
    // Open the file as binary, read-only; it must already exist ("rb")
    FILE *fp = fopen(path, "rb");
    // magic_number identifies the file format (2051 for IDX image files)
    // total is the number of images (records) in the file
    // row, col are the pixel dimensions of each image
    unsigned magic_number, total, row, col;
    // Read each header field from the file
    fread(&magic_number, sizeof(unsigned), 1, fp);
    fread(&total, sizeof(unsigned), 1, fp);
    fread(&row, sizeof(unsigned), 1, fp);
    fread(&col, sizeof(unsigned), 1, fp);
    // Convert byte order
    magic_number = swap32(magic_number);
    total = swap32(total);
    row = swap32(row);
    col = swap32(col);
    // Fill in the metadata
    ex_data Data;
    Data.total = total;
    Data.length = row * col;
    // Allocate memory for the pixel data and read it in
    (*images) = (unsigned char *)malloc(total * Data.length * sizeof(unsigned char));
    fread(*images, sizeof(unsigned char), total * Data.length, fp);
    fclose(fp);
    return Data;
}
// Read label data; path is the file path. On return *labels points to the
// labels, stored contiguously as a one-dimensional array.
void read_label(const char *path, unsigned char **labels){
    FILE *fp = fopen(path, "rb");
    unsigned magic_number, total;
    fread(&magic_number, sizeof(unsigned), 1, fp);
    fread(&total, sizeof(unsigned), 1, fp);
    // Convert byte order
    magic_number = swap32(magic_number);
    total = swap32(total);
    // Allocate memory for the labels and read them in
    *labels = (unsigned char *)malloc(total * sizeof(unsigned char));
    fread(*labels, sizeof(unsigned char), total, fp);
    fclose(fp);
}
// Squared Euclidean distance, computed on the device
__device__ int distance(unsigned char *vec1, unsigned char *vec2, int dimension){
    int res = 0;
    int i;
    for (i = 0; i < dimension; i++) {
        int t = vec1[i] - vec2[i];
        res += t * t;
    }
    return res;
}
// Kernel: each block classifies one test image at a time
__global__ void solve(unsigned char *train_images, unsigned char *train_labels,
                      unsigned char *test_images, unsigned char *test_labels,
                      ex_data train_ex, ex_data test_ex, int K,
                      int *d_correct_p) {
    // Shared buffer holding each thread's K nearest candidates (K * blockDim.x entries)
    extern __shared__ label_distance dists[];
    int length = train_ex.length;
    // Each block handles one group of test images:
    // block x processes test images x, x + gridDim.x, x + 2 * gridDim.x, ...
    int test_idx;
    for (test_idx = blockIdx.x; test_idx < test_ex.total; test_idx += gridDim.x){
        // Per-thread buffer of the K smallest distances seen so far
        label_distance *tmp = (label_distance*)malloc(sizeof(label_distance) * K);
        // Initialize every slot to the largest int value
        int i;
        for(i = 0;i < K;i++){
            tmp[i].distance = 0x7fffffff;
        }
        // Each thread handles a slice of the training set:
        // thread x processes training images x, x + blockDim.x, x + 2 * blockDim.x, ...
        // keeping the K nearest to the test image this block is working on
        int train_idx;
        for (train_idx = threadIdx.x; train_idx < train_ex.total; train_idx+=blockDim.x){
            int dist = distance(test_images + (test_idx * length),
                                train_images + (train_idx * length), length);
            // Insert into the sorted tmp buffer so it always holds the K smallest
            int i;
            for (i = 0; i < K; i++) {
                if (dist < tmp[i].distance) {
                    // Shift the larger entries down one slot to make room
                    for (int j = K - 1; j > i; j--) {
                        tmp[j] = tmp[j - 1];
                    }
                    tmp[i].distance = dist;
                    tmp[i].label = train_labels[train_idx];
                    break;
                }
            }
        }
        // Copy this thread's K smallest entries into the shared dists array
        memcpy(dists + K * threadIdx.x, tmp, K * sizeof(label_distance));
        __syncthreads();
        // All threads have finished computing and copying
        if (threadIdx.x == 0) {
            // Every distance for this test image has now been considered and
            // each thread has contributed its K smallest; merge them here by
            // bubbling the K globally smallest entries to the front
            for (int i = 0; i < K; i++) {
                // One bubble pass moves the next smallest entry to the front
                for (int j = (K * blockDim.x) - 1; j > 0; j--) {
                    if (dists[j].distance < dists[j - 1].distance) {
                        label_distance tmp_ = dists[j];
                        dists[j] = dists[j - 1];
                        dists[j - 1] = tmp_;
                    }
                }
            }
            // Count the labels among the K nearest and predict the majority
            int cnt[10] = {0}, max_cnt = -1, prediction = -1;
            for (int i = 0; i < K; i++) {
                if (++cnt[dists[i].label] > max_cnt) {
                    max_cnt = cnt[dists[i].label];
                    prediction = dists[i].label;
                }
            }
            // Check against the ground-truth label
            if (prediction == test_labels[test_idx]) {
                // The counter is shared by all blocks; use an atomic add
                // so concurrent increments are not lost
                atomicAdd(d_correct_p, 1);
            }
        }
        free(tmp);
        // Wait for thread 0 to finish merging before the next iteration overwrites dists
        __syncthreads();
    }
}
int main(int argc, char const *argv[]) {
    // Defaults; can be overridden on the command line as: K block_num thread_num
    int K = 5;
    int block_num = 64;
    int thread_num = 512;
    if (argc >= 4) {
        K = atoi(argv[1]);
        block_num = atoi(argv[2]);
        thread_num = atoi(argv[3]);
    }
unsigned char *train_images, *test_images, *d_train_images, *d_test_images;
unsigned char *train_labels, *test_labels, *d_train_labels, *d_test_labels;
ex_data train_ex = read_image("./dataset/train-images-idx3-ubyte", &train_images);
ex_data test_ex = read_image("./dataset/t10k-images-idx3-ubyte", &test_images);
read_label("./dataset/train-labels-idx1-ubyte", &train_labels);
read_label("./dataset/t10k-labels-idx1-ubyte", &test_labels);
int len = train_ex.length;
cudaMalloc((void **)&d_train_images,
sizeof(unsigned char) * train_ex.total * len);
cudaMalloc((void **)&d_test_images,
sizeof(unsigned char) * test_ex.total * len);
cudaMalloc((void **)&d_train_labels,
sizeof(unsigned char) * train_ex.total);
cudaMalloc((void **)&d_test_labels,
sizeof(unsigned char) * test_ex.total);
cudaMemcpy(d_train_images, train_images,
train_ex.total * len * sizeof(unsigned char),
cudaMemcpyHostToDevice);
cudaMemcpy(d_test_images, test_images,
test_ex.total * len * sizeof(unsigned char),
cudaMemcpyHostToDevice);
cudaMemcpy(d_train_labels, train_labels,
train_ex.total * sizeof(unsigned char),
cudaMemcpyHostToDevice);
cudaMemcpy(d_test_labels, test_labels,
test_ex.total * sizeof(unsigned char),
cudaMemcpyHostToDevice);
free(train_images);
free(train_labels);
free(test_images);
free(test_labels);
    int correct = 0;
    int *d_correct_p;
    cudaMalloc(&d_correct_p, sizeof(int));
    // cudaMalloc does not zero device memory; clear the counter before launch
    cudaMemset(d_correct_p, 0, sizeof(int));
    solve<<<block_num, thread_num, K * thread_num * sizeof(label_distance)>>>(
        d_train_images, d_train_labels, d_test_images, d_test_labels, train_ex,
        test_ex, K, d_correct_p);
    // Wait for the kernel to finish before reading back the result
    cudaDeviceSynchronize();
    cudaMemcpy(&correct, d_correct_p, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_train_images);
    cudaFree(d_train_labels);
    cudaFree(d_test_images);
    cudaFree(d_test_labels);
    cudaFree(d_correct_p);
printf("total correct: %d\n", correct);
printf("accuracy: %lf\n", (double)correct / test_ex.total);
return 0;
}
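Assuming the source is saved as knn.cu and the dataset sits in ./dataset as the paths in main() expect, the CUDA version builds with nvcc; K, the block count, and the thread count can be passed on the command line (defaults 5, 64, 512):

```shell
# Build and run with K=5, 64 blocks, 512 threads per block
nvcc -O2 -o knn_cuda knn.cu
./knn_cuda 5 64 512
```

Note that the dynamic shared-memory size passed at launch, K * thread_num * sizeof(label_distance), must stay within the device's per-block shared-memory limit, so very large K or thread counts will fail to launch.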