[CUMT] China University of Mining and Technology, Information Security Management and Engineering, Spring 2024, OpenJudge OJ Problem O: RBF Fuzzy Neural Network

Problem

Sample Input

0.2700 0.4100 0.5500 0.6900 0.8300
0.2700 0.4100 0.5500 0.6900 0.8300
0.5400 0.6200 0.7000 0.7800 0.8600
0.3848 0.2811 0.2217 0.2419 0.3272
0.3235 0.2408 0.2247 0.2868 0.3914
0.2292 0.1766 0.1506 0.1641 0.2098
0.1133 0.5441 0.6416 0.3303 -0.4410
0.3263 0.6515 0.8539 0.4928 0.0596
0.5984 0.9184 0.2150 0.9312 0.5549
0.7655 0.6699 1.3666 0.7282 -0.6318
0.5881 1.1443 0.9429 0.5102 -1.2708
0.1075 0.9739 0.0840 0.0870 -0.6957
0.7695 0.9608 0.2797 0.7561 -0.7203
0.1623 0.4535 0.8074 0.4950 0.6643
0.0022 0.7793 0.6175 0.2742 0.2068
0.5263 0.7267 0.7719 0.7041 -0.3275
0.4510 0.6411 0.6703 0.7366 0.0461
0.2336 0.2250 0.1408 0.8396 -0.6770
0.9845 0.0206 0.1951 0.4806 -0.0137
0.4390 0.8834 0.0683 0.6485 0.6552
-0.0025 0.6668 0.9248 0.1692 0.5034
0.5894 0.6431 0.4897 0.0732 0.4307
0.5822 0.2615 0.0873 0.0109 -0.0928
0.9709 0.8054 0.6852 0.4127 0.1268
0.5387 0.5728 0.3170 0.9931 0.2330
0.1576 0.7498 0.2125 0.8277 0.1333
0.2084 0.2126 0.1092 0.7904 0.2330
0.0869 0.8976 0.3294 0.6959 -0.2550
0.3805 0.0096 0.8607 0.2873 -0.1098
0.0121 -0.1351 -0.3671 0.6428 0.7876
0.6429 0.3749 -0.0041 0.9065 0.7212
0.0451 0.4604 0.9001 0.5920 -0.1004
0.9726 0.6070 0.9430 0.7130 -0.6330
0.7898 0.3574 0.7173 0.8620 0.6821
0.5647 0.2413 1.0497 0.6862 -0.2675
0.5925 1.2092 0.7281 0.8592 -1.0038
0.7019 0.7562 0.0876 0.7718 -0.0050
0.7144 0.3129 0.0237 0.9921 -0.1584
0.8831 0.3048 0.5648 0.6469 0.7286
0.3160 0.3182 0.3314 0.4406 0.2124
0.0236 0.2496 1.0824 0.7434 0.0983
0.1398 0.8074 0.3751 0.2727 -0.3629
0.4428 0.3058 0.1130 0.7417 -0.4846
0.5603 0.7398 0.1514 0.8297 0.1954
0.9715 0.1463 0.7718 0.8440 0.7972
0.2607 0.2781 0.7774 0.7396 0.0193
0.2343 0.9538 0.1147 0.2996 0.5494
0.5309 0.4997 0.7367 0.2485 0.0589
0.4485 0.5552 0.0041 0.0138 0.7628
0.2445 0.7197 -0.1878 0.8816 0.2943
0.6939 -0.1384 -0.3964 0.8750 0.5505
0.3383 0.1415 0.7800 0.6857 0.5268
0.0284 0.8510 0.5228 0.2440 0.6381
0.3197 0.7967 0.1983 0.0589 0.0485
0.3010 -0.1159 -0.4453 0.2390 0.4173
0.2652 0.0158 -0.6425 0.8096 -0.0764
0.4428 0.1321 0.8087 0.7040 0.0013
0.7499 0.8390 0.6381 0.0700 -0.0591
0.7244 0.1611 0.4911 0.4518 -0.1178
0.1612 0.1168 0.7519 0.5689 -0.3599
0.5799 0.6971 0.3235 0.1074 -0.5963
0.6237 0.5418 0.8237 0.3329 0.0089
0.8997 0.4762 0.1834 0.7195 -0.7169
0.7054 0.8577 0.2880 0.6434 0.2455
0.7821 0.7699 0.1059 0.8161 0.3116
0.0844 0.5153 0.6267 0.1923 0.3843
0.6897 0.0922 0.0435 0.6804 0.2389
0.9658 0.7602 0.4960 0.0490 0.0670
0.2621 0.8817 -0.0044 0.6554 -0.2147
0.1873 0.0375 0.0226 0.9516 0.0116
0.4774 -0.0406 0.8548 0.4148 -0.0669
0.3290 0.1361 0.0830 0.3986 0.1113
0.2031 0.3927 0.3130 0.7000 0.3177
0.7079 0.1451 0.4205 0.0990 0.2496
0.6291 0.0752 0.2461 0.0915 0.0408
0.5206 -0.3682 -0.2024 0.1853 0.2684
0.7734 0.1168 0.4436 0.0895 0.3585
0.9244 0.0845 0.3235 0.6838 0.3604
0.9185 -0.1760 0.6443 0.3700 -0.0272
0.8513 0.1241 -0.5765 0.4827 -0.0211
0.4773 -0.1296 -0.8765 0.1138 0.3881
0.6716 0.5077 0.9640 0.4677 0.2392
0.1383 0.5479 0.1464 0.9545 -0.5852
0.2604 0.0739 0.6549 0.8912 -0.5871
0.6553 0.3271 0.3388 0.1159 -0.3352
0.7407 -0.0106 0.1106 0.9499 -0.2090
0.9835 0.0049 -0.0392 0.9348 0.1169
0.9160 0.7012 0.7411 0.4677 0.2041
0.8687 0.7388 0.4405 0.7438 -0.3535
0.8039 0.7365 -0.1849 0.8532 -0.1366
0.1181 0.8123 0.7121 0.9658 0.3633
0.2937 0.8403 0.4632 0.1293 0.3472
0.9498 0.0453 0.5072 0.7453 0.1928
0.2355 0.6479 0.6857 0.3439 -0.2704
0.2038 0.7880 0.4050 0.8992 0.1824
0.6539 0.6859 0.0948 0.2029 0.2438
0.3334 -0.0815 0.1111 0.7159 1.2325
0.7254 0.2035 0.6527 0.8496 0.5593
0.4127 0.2707 -0.1865 0.6645 0.8276
0.2703 0.0061 0.3103 0.8375 0.1148
0.5924 -0.6289 -0.6292 0.7702 0.3792
0.1527 0.0240 0.4036 0.8602 0.2246
0.6507 0.3932 0.1216 0.8812 0.1380
0.8580 -0.1137 0.4130 0.4316 0.4629
0.9845 -0.5895 -0.5338 0.1881 0.6889
0.9366 -0.7058 -0.4674 0.3189 0.3795
0.0990 0.6414 0.2512 0.6010 -0.3285
0.3863 0.0356 0.1107 0.0294 0.0421
0.3458 0.1805 -0.1808 0.0524 0.0799
0.7167 0.1757 0.1533 0.2667 -0.2748
0.9834 0.3914 -0.2799 0.9736 0.1775
0.8953 0.6631 0.3320 0.4029 0.3510
0.9554 0.2033 0.3584 0.5131 -0.2075
0.3876 0.5357 -0.1301 0.6254 -0.0429
0.9757 0.7840 -0.3126 0.0682 0.5769
0.6959 0.2214 -0.1877 0.9891 -0.1501
0.8570 0.1660 0.1776 0.6035 0.0431
0.6550 0.7354 0.5956 0.8293 0.7247
0.1331 0.1009 -0.3367 0.0101 0.3596
0.1001 0.1937 -0.8616 0.9577 0.5508
0.6562 -0.1947 -0.3536 0.1379 0.1780
0.5128 0.7489 0.5035 0.8837 0.5551
0.5303 0.6976 0.1258 0.1606 0.3937
0.8002 0.1083 0.1943 0.3942 0.7652
0.3570 -0.3580 -0.4489 0.4512 0.6334
0.5260 -0.8240 -0.7687 1.0206 0.7637
0.1230 0.2258 0.4862 0.8944 0.3819
0.9850 0.0607 -0.1251 0.2223 0.1006
0.1940 0.5208 -0.3286 0.6253 0.4857
0.8966 0.1172 -0.7113 0.8131 0.0814
0.4929 -0.4165 -0.9335 0.9292 0.1888
-0.18164666 -1.264454 -0.7937 -0.1094 1.6704
5
0.7 0.4 0.6
0.3 0.6 0.9
0.1 0.4 0.7
0.8 0.4 0.3
0.5 0.6 0.3

Sample Output

0.2900
0.3166
0.1815
0.3107
0.2914

Approach

The problem statement says a trained RBF neural network is provided, but what is actually given are the trained weights; the network itself still has to be implemented. This is not difficult, and can be built entirely from scratch.

In keeping with object-oriented design, the network can be encapsulated in a class named RadialBasisFunction, with the declaration and methods placed inside a NeuralNetworks namespace.

Since the layer weights are read from input rather than learned from a training set, we only need to store them. A few interface functions set the weights and the input vector:

void RadialBasisFunction::SetInput(vector<double> x)
{
	X = x;
}

void RadialBasisFunction::SetmCAndmB(vector<vector<double>> t_c, vector<vector<double>> t_b)
{
	c = t_c;
	b = t_b;
}

void RadialBasisFunction::SetWij(vector<vector<double>> t_wij)
{
	wij = t_wij;
}

void RadialBasisFunction::SetWj(vector<double> t_wj)
{
	wj_hty = t_wj;
}

Each layer's computation is then implemented as its own method:

void RadialBasisFunction::Out2Fun()
{
	for (int i = 0; i < i1; i++)
	{
		vector<double> tem1;
		for (int j = 0; j < j1; j++)
		{
			double tem2 = pow(X.at(i) - c.at(i).at(j), 2) / pow(b.at(i).at(j), 2);
			tem1.push_back(pow(Ee, -tem2));
		}
		out2.push_back(tem1);
	}
}

void RadialBasisFunction::Out3Fun()
{
	for (int i = 0; i < j123; i++)
	{
		for (int j = 0; j < j123; j++)
		{
			for (int k = 0; k < j123; k++)
			{
				out3.push_back(out2.at(0).at(i) * out2.at(1).at(j) * out2.at(2).at(k));
			}
		}
	}
}

void RadialBasisFunction::Out4Fun()
{
	for (int j = 0; j < j123; j++)
	{
		double sum = 0.0f;
		for (int i = 0; i < k3; i++)
		{
			sum += out3.at(i) * wij.at(i).at(j);
		}
		out4.push_back(logsig(sum));
	}
}

void RadialBasisFunction::Out5Fun()
{
	double sum = 0.0f;
	for (int i = 0; i < j123; i++)
	{
		sum += wj_hty.at(i) * out4.at(i);
	}
	m_result = logsig(sum);
}

double RadialBasisFunction::logsig(double x)
{
	return 1.0f / (1.0f + pow(Ee, -x));
}

The final result is stored in the m_result member, and a simple accessor (GetResult) returns it.

Notes

When the network is run repeatedly in main, the intermediate data produced by each run must be cleared; otherwise it keeps accumulating and corrupts later results.

The clearing code is as follows:

void RadialBasisFunction::ClearData()
{
	X.clear();
	out2.clear();
	out3.clear();
	out4.clear();
}

Code

The complete code:

#include <iostream>
#include <vector>
#include <cmath>

#define Ee 2.718281

using namespace std;

namespace NeuralNetworks
{
	class RadialBasisFunction
	{
	public:
		// empty constructor (not strictly needed)
		RadialBasisFunction();
		// set the input vector
		void SetInput(vector<double> x);

		// set the c (center) and b (width) matrices
		void SetmCAndmB(vector<vector<double>> t_c, vector<vector<double>> t_b);

		// set the layer-4 weights
		void SetWij(vector<vector<double>> t_wij);

		// set the layer-5 weights
		void SetWj(vector<double> t_wj);

		// run one forward pass
		void Run();

		// retrieve the result
		double GetResult() const;

		// clear the data from the previous run
		void ClearData();

	private:
		// layer 2: Gaussian membership layer
		vector<vector<double>> c;
		vector<vector<double>> b;
		vector<vector<double>> out2;

		const int i1 = 3; // number of inputs
		const int j1 = 5; // fuzzy sets per input

		void Out2Fun();


		// layer 3: rule layer
		vector<double> out3;

		const int j123 = 5;
		const int k3 = 125; // 5^3 rule combinations

		void Out3Fun();

		// layer 4 data
		vector<vector<double>> wij;
		vector<double> out4;

		void Out4Fun();

		// layer 5 data
		vector<double> wj_hty;

		void Out5Fun();

		// helper function
		double logsig(double x);

		// network input
		vector<double> X;

		double m_result;
	};

	RadialBasisFunction::RadialBasisFunction()
	{
	}

	void RadialBasisFunction::SetInput(vector<double> x)
	{
		X = x;
	}

	void RadialBasisFunction::SetmCAndmB(vector<vector<double>> t_c, vector<vector<double>> t_b)
	{
		c = t_c;
		b = t_b;
	}

	void RadialBasisFunction::SetWij(vector<vector<double>> t_wij)
	{
		wij = t_wij;
	}

	void RadialBasisFunction::SetWj(vector<double> t_wj)
	{
		wj_hty = t_wj;
	}

	void RadialBasisFunction::Run()
	{
		Out2Fun();
		Out3Fun();
		Out4Fun();
		Out5Fun();
	}

	double RadialBasisFunction::GetResult() const
	{
		return m_result;
	}

	void RadialBasisFunction::Out2Fun()
	{
		for (int i = 0; i < i1; i++)
		{
			vector<double> tem1;
			for (int j = 0; j < j1; j++)
			{
				double tem2 = pow(X.at(i) - c.at(i).at(j), 2) / pow(b.at(i).at(j), 2);
				tem1.push_back(pow(Ee, -tem2));
			}
			out2.push_back(tem1);
		}
	}

	void RadialBasisFunction::Out3Fun()
	{
		for (int i = 0; i < j123; i++)
		{
			for (int j = 0; j < j123; j++)
			{
				for (int k = 0; k < j123; k++)
				{
					out3.push_back(out2.at(0).at(i) * out2.at(1).at(j) * out2.at(2).at(k));
				}
			}
		}
	}

	void RadialBasisFunction::Out4Fun()
	{
		for (int j = 0; j < j123; j++)
		{
			double sum = 0.0f;
			for (int i = 0; i < k3; i++)
			{
				sum += out3.at(i) * wij.at(i).at(j);
			}
			out4.push_back(logsig(sum));
		}
	}

	void RadialBasisFunction::Out5Fun()
	{
		double sum = 0.0f;
		for (int i = 0; i < j123; i++)
		{
			sum += wj_hty.at(i) * out4.at(i);
		}
		m_result = logsig(sum);
	}

	double RadialBasisFunction::logsig(double x)
	{
		return 1.0f / (1.0f + pow(Ee, -x));
	}

	void RadialBasisFunction::ClearData()
	{
		X.clear();
		out2.clear();
		out3.clear();
		out4.clear();
	}
}

vector<vector<double>> InputM(int row, int col)
{
	vector<vector<double>> m;
	for (int i = 0; i < row; i++)
	{
		vector<double> tem1;
		for (int j = 0; j < col; j++)
		{
			double tem2;
			cin >> tem2;
			tem1.push_back(tem2);
		}
		m.push_back(tem1);
	}
	return m;
}

vector<double> InputM(int row)
{
	vector<double> m;
	for (int i = 0; i < row; i++)
	{
		double tem2;
		cin >> tem2;
		m.push_back(tem2);
	}
	return m;
}

int main()
{
	NeuralNetworks::RadialBasisFunction* test = new NeuralNetworks::RadialBasisFunction();
	vector<vector<double>> t_c = InputM(3, 5);
	vector<vector<double>> t_b = InputM(3, 5);
	test->SetmCAndmB(t_c, t_b);
	test->SetWij(InputM(125, 5));
	test->SetWj(InputM(5));

	int n;
	cin >> n;

	vector<vector<double>> initx;
	for (int i = 0; i < n; i++)
	{
		vector<double> tem1;
		for (int j = 0; j < 3; j++)
		{
			double tem2;
			cin >> tem2;
			tem1.push_back(tem2);
		}
		initx.push_back(tem1);
	}
	for (int i = 0; i < n; i++)
	{
		test->SetInput(initx.at(i));
		test->Run();
		printf("%.4f\n", test->GetResult());
		test->ClearData();
	}
	delete test;
	return 0;
}
