A Java Implementation of a Non-Learning Single-Layer Perceptron (Log 3)

The requirements are as follows:

(Image 16: the eight sample points, to be separated into four classes.)

When the hard-limit (step) function is chosen as the neuron's transfer function, separating the points above into four classes requires 2 neurons. More generally, for this kind of classification problem, n neurons can encode up to 2^n classes.
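To make the 2^n coding concrete, here is a minimal plain-Java sketch of a hard-limit neuron, independent of Neuroph. The class and method names are invented for this illustration; the weights are the ones hard-coded later in PerceptronClassifyNoLearn.

public class HardLimitDemo {

    // Hard-limit (step) transfer function: 1 if the net input is positive, else 0
    static int step(double net) {
        return net > 0 ? 1 : 0;
    }

    // One neuron: weighted sum of the inputs plus a bias, passed through step()
    static int neuron(double[] weights, double bias, double[] x) {
        double net = bias;
        for (int i = 0; i < x.length; i++) {
            net += weights[i] * x[i];
        }
        return step(net);
    }

    public static void main(String[] args) {
        double[] p = {1, 2};
        // Two neurons produce a 2-bit code, so they can tell apart up to 2^2 = 4 classes
        int bit0 = neuron(new double[]{-3, -1}, 1, p);
        int bit1 = neuron(new double[]{1, -2}, 0, p);
        System.out.println("class code = [" + bit0 + ", " + bit1 + "]");
    }
}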

 

Combining this with the relation proved in the previous post:

(Image 17: the weight derivation from the previous post.)

which yields all of the weight values.
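The derivation image is missing from this repost, but reading the weights hard-coded in the class below (connections 0 and 1 carry the inputs x1 and x2, connection 2 the bias), the two step neurons implement the following decision inequalities. This is a reconstruction from the code, not the original derivation:

\begin{aligned}
n_1 \text{ fires} &\iff -3x_1 - x_2 + 1 > 0,\\
n_2 \text{ fires} &\iff x_1 - 2x_2 > 0.
\end{aligned}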

  

The code is as follows:

   

The first class is the driver that actually runs the network; the real core is in PerceptronClassifyNoLearn.

package com.cgrj.com;

import java.util.Arrays;

import org.neuroph.core.data.DataSet;
import org.neuroph.core.data.DataSetRow;

public class MyNeturol {

    public static void main(String[] args) {
        // Eight sample points, two per class. The desired outputs are NaN
        // because this network does not learn; only the inputs are used.
        DataSet trainingSet = new DataSet(2, 2);
        trainingSet.addRow(new DataSetRow(new double[]{1, 2}, new double[]{Double.NaN, Double.NaN}));
        trainingSet.addRow(new DataSetRow(new double[]{1, 1}, new double[]{Double.NaN, Double.NaN}));
        trainingSet.addRow(new DataSetRow(new double[]{2, 0}, new double[]{Double.NaN, Double.NaN}));
        trainingSet.addRow(new DataSetRow(new double[]{2, -1}, new double[]{Double.NaN, Double.NaN}));
        trainingSet.addRow(new DataSetRow(new double[]{-1, 2}, new double[]{Double.NaN, Double.NaN}));
        trainingSet.addRow(new DataSetRow(new double[]{-2, 1}, new double[]{Double.NaN, Double.NaN}));
        trainingSet.addRow(new DataSetRow(new double[]{-1, -1}, new double[]{Double.NaN, Double.NaN}));
        trainingSet.addRow(new DataSetRow(new double[]{-2, -2}, new double[]{Double.NaN, Double.NaN}));

        // Build the fixed-weight perceptron with two inputs
        PerceptronClassifyNoLearn perceptronClassifyNoLearn = new PerceptronClassifyNoLearn(2);

        // Run each point through the network and print its 2-bit class code
        for (DataSetRow row : trainingSet.getRows()) {
            perceptronClassifyNoLearn.setInput(row.getInput());
            perceptronClassifyNoLearn.calculate();
            double[] netWorkOutput = perceptronClassifyNoLearn.getOutput();
            System.out.println(Arrays.toString(row.getInput()) + "=" + Arrays.toString(netWorkOutput));
        }
    }

}
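For reference, tracing the fixed weights by hand (Neuroph's STEP transfer outputs 1.0 when the net input is positive and 0.0 otherwise), the console output should pair the eight points off into the four class codes:

[1.0, 2.0]=[0.0, 0.0]
[1.0, 1.0]=[0.0, 0.0]
[2.0, 0.0]=[0.0, 1.0]
[2.0, -1.0]=[0.0, 1.0]
[-1.0, 2.0]=[1.0, 0.0]
[-2.0, 1.0]=[1.0, 0.0]
[-1.0, -1.0]=[1.0, 1.0]
[-2.0, -2.0]=[1.0, 1.0]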

 

PerceptronClassifyNoLearn defines the properties of the input and output layers. Since the network cannot learn, its decision rule is fixed in advance: the weights are hard-coded in this class.

    

package com.cgrj.com;

import org.neuroph.core.Layer;
import org.neuroph.core.NeuralNetwork;
import org.neuroph.core.Neuron;
import org.neuroph.nnet.comp.neuron.BiasNeuron;
import org.neuroph.nnet.comp.neuron.InputNeuron;
import org.neuroph.util.ConnectionFactory;
import org.neuroph.util.LayerFactory;
import org.neuroph.util.NeuralNetworkFactory;
import org.neuroph.util.NeuralNetworkType;
import org.neuroph.util.NeuronProperties;
import org.neuroph.util.TransferFunctionType;

public class PerceptronClassifyNoLearn extends NeuralNetwork {
    
      
    public PerceptronClassifyNoLearn(int inputNeuronsCount) {
        this.createNetWork(inputNeuronsCount);
    }

    private void createNetWork(int inputNeuronsCount) {
        // Set the network type to perceptron
        this.setNetworkType(NeuralNetworkType.PERCEPTRON);

        // Configure the input neurons, which carry the input stimuli
        NeuronProperties inputNeuronProperties = new NeuronProperties();
        inputNeuronProperties.setProperty("neuronType", InputNeuron.class);

        // Input layer built from the input neurons
        Layer inputLayer = LayerFactory.createLayer(inputNeuronsCount, inputNeuronProperties);
        this.addLayer(inputLayer);
        // Add a BiasNeuron to the input layer to supply the bias term
        inputLayer.addNeuron(new BiasNeuron());

        // Output neurons use the step (hard-limit) transfer function
        NeuronProperties outputNeuronProperties = new NeuronProperties();
        outputNeuronProperties.setProperty("transferFunction", TransferFunctionType.STEP);
        Layer outputLayer = LayerFactory.createLayer(2, outputNeuronProperties);
        this.addLayer(outputLayer);

        ConnectionFactory.fullConnect(inputLayer, outputLayer);
        NeuralNetworkFactory.setDefaultIO(this);

        // Hard-code the weights derived earlier.
        // Neuron 0 fires when -3*x1 - x2 + 1 > 0
        Neuron n = outputLayer.getNeuronAt(0);
        n.getInputConnections()[0].getWeight().setValue(-3);
        n.getInputConnections()[1].getWeight().setValue(-1);
        n.getInputConnections()[2].getWeight().setValue(1);   // bias weight

        // Neuron 1 fires when x1 - 2*x2 > 0
        n = outputLayer.getNeuronAt(1);
        n.getInputConnections()[0].getWeight().setValue(1);
        n.getInputConnections()[1].getWeight().setValue(-2);
        n.getInputConnections()[2].getWeight().setValue(0);   // no bias
    }
}
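One detail worth noting: the BiasNeuron is added to the input layer after the two input neurons, so after fullConnect it should arrive as the third input connection on each output neuron. That is why index 2 carries the constant (bias) weight above.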

 

The same network can be used to determine which quadrant a point lies in. Modify the weight assignments above as follows:

           

            // Neuron 0: fires when x2 (the y coordinate) is positive
            Neuron n = outputLayer.getNeuronAt(0);
            n.getInputConnections()[0].getWeight().setValue(0);
            n.getInputConnections()[1].getWeight().setValue(1);
            n.getInputConnections()[2].getWeight().setValue(0);

            // Neuron 1: fires when x1 (the x coordinate) is positive
            n = outputLayer.getNeuronAt(1);
            n.getInputConnections()[0].getWeight().setValue(1);
            n.getInputConnections()[1].getWeight().setValue(0);
            n.getInputConnections()[2].getWeight().setValue(0);

 

The first neuron then determines which side of the x axis the point is on (the sign of y), and the second neuron determines which side of the y axis it is on (the sign of x).

 

     

            // Map the 2-bit output code [y-bit, x-bit] to a quadrant name
            String str = "";
            switch (Arrays.toString(netWorkOutput)) {
            case "[1.0, 1.0]":   // y > 0, x > 0
                str = "Quadrant I";
                break;
            case "[0.0, 1.0]":   // y <= 0, x > 0
                str = "Quadrant IV";
                break;
            case "[1.0, 0.0]":   // y > 0, x <= 0
                str = "Quadrant II";
                break;
            case "[0.0, 0.0]":   // y <= 0, x <= 0
                str = "Quadrant III";
                break;
            default:
                break;
            }

            System.out.println(Arrays.toString(row.getInput()) + "=" + Arrays.toString(netWorkOutput) + " --- " + str);

 

This prints a quadrant label for each point.

 

Sample run (points lying on the axes are glossed over here: because the step function outputs 0 for a net input of 0, a coordinate of 0 is treated as negative):
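The run screenshot has been lost in this repost; traced by hand from the quadrant weights, the output should read:

[1.0, 2.0]=[1.0, 1.0] --- Quadrant I
[1.0, 1.0]=[1.0, 1.0] --- Quadrant I
[2.0, 0.0]=[0.0, 1.0] --- Quadrant IV
[2.0, -1.0]=[0.0, 1.0] --- Quadrant IV
[-1.0, 2.0]=[1.0, 0.0] --- Quadrant II
[-2.0, 1.0]=[1.0, 0.0] --- Quadrant II
[-1.0, -1.0]=[0.0, 0.0] --- Quadrant III
[-2.0, -2.0]=[0.0, 0.0] --- Quadrant III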

 

       

 

The next post will analyze each class and method in detail, along with the principles behind their implementation.

      

   

 

Reposted from: https://www.cnblogs.com/beigongfengchen/p/5462020.html
