Encog (Part 1)

Encog is primarily a Java library of neural network algorithms.

The most basic network type is the BasicNetwork class, a feedforward neural network.

The methods of the BasicNetwork class are listed below; a short usage sketch follows the list.

void addLayer(Layer layer)
Add a new layer to the network.
void addWeight(int fromLayer, int fromNeuron, int toNeuron, double value)
Adjust a weight: the weight from neuron fromNeuron in layer fromLayer to neuron toNeuron in layer fromLayer+1 is increased by value (all indices are zero-based).

network.addWeight(0, 1, 2, -1);  // add -1 to the weight connecting the second neuron of the first layer to the third neuron of the second layer

double calculateError(MLDataSet data)
Calculate the error for this neural network.
int calculateNeuronCount()
Calculate the total number of neurons in the network across all layers.
int classify(MLData input)
Classify the input into a group.
void clearContext()
Clear any data from any context layers.
Object clone()
Return a clone of this neural network.
void compute(double[] input, double[] output)
Compute the output for this network.
MLData compute(MLData input)
Compute the output for a given input to the neural network.
void decodeFromArray(double[] encoded)
Decode an array to this object.
String dumpWeights()
void enableConnection(int fromLayer, int fromNeuron, int toNeuron, boolean enable)
Enable or disable a single connection in the network.
int encodedArrayLength()
void encodeToArray(double[] encoded)
Encode the object to the specified array.
boolean equals(BasicNetwork other, int precision)
Determine if this neural network is equal to another.
boolean equals(Object other)
Compare the two neural networks.
ActivationFunction getActivation(int layer)
Get the activation function for the specified layer.
String getFactoryArchitecture()
String getFactoryType()
FlatNetwork getFlat()
int getInputCount()
double getLayerBiasActivation(int l)
Get the bias activation for the specified layer.
int getLayerCount()
int getLayerNeuronCount(int l)
Get the neuron count for the specified layer.
double getLayerOutput(int layer, int neuronNumber)
Get the layer output for the specified neuron.
int getLayerTotalNeuronCount(int l)
Get the total (including bias and context) neuron count for a layer.
int getOutputCount()
NeuralStructure getStructure()
double getWeight(int fromLayer, int fromNeuron, int toNeuron)
Get the weight between the two layers.
int hashCode()
Generate a hash code.
boolean isConnected(int layer, int fromNeuron, int toNeuron)
Determine if the specified connection is enabled.
boolean isLayerBiased(int l)
Determine if the specified layer is biased.
void reset()
Randomize the network's weights.
void reset(int seed)
Randomize the network's weights using the specified random seed.
void setBiasActivation(double activation)
Sets the bias activation for every layer that supports bias.
void setLayerBiasActivation(int l, double value)
Set the bias activation for the specified layer.
void setWeight(int fromLayer, int fromNeuron, int toNeuron, double value)
Set the weight between the two specified neurons.
String toString()
void updateProperties()
Update any objects when a property changes.
void validateNeuron(int targetLayer, int neuron)
Validate that the specified targetLayer and neuron are valid.
int winner(MLData input)
Determine the winner for the specified input.
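
As a small illustration of these methods, here is a minimal sketch (the class name InspectNetwork, the 2-3-1 layer sizes, and the seed are arbitrary choices for demonstration, not anything required by the library) that builds a network, randomizes it, and reads and modifies a single weight:

import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;

public class InspectNetwork {
    public static void main(String[] args) {
        // a tiny 2-3-1 feedforward network, built only to exercise the accessors above
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 2));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        network.getStructure().finalizeStructure();
        network.reset(42);                                  // randomize weights with a fixed seed

        System.out.println("layers:  " + network.getLayerCount());           // 3
        System.out.println("neurons: " + network.calculateNeuronCount());
        System.out.println("hidden:  " + network.getLayerNeuronCount(1));    // 3

        // read, shift, then overwrite the weight from layer 0, neuron 0 to layer 1, neuron 2
        double w = network.getWeight(0, 0, 2);
        network.addWeight(0, 0, 2, 0.5);                    // weight becomes w + 0.5
        network.setWeight(0, 0, 2, 0.0);
        System.out.println("old weight: " + w);
        System.out.println(network.dumpWeights());
    }
}

Because reset(int) takes a seed, repeated runs of this sketch should produce the same randomized weights.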
 

 The BasicLayer class is the most basic layer; its constructors and member methods are listed below, followed by a short usage sketch.

 

Constructors
BasicLayer(ActivationFunction activationFunction, boolean hasBias, int neuronCount)
First argument: the activation function
Second argument: whether the layer has a bias node
Third argument: the number of neurons
BasicLayer(int neuronCount)
Construct this layer with a sigmoid activation function.

 

Member methods
ActivationFunction getActivationFunction()
BasicNetwork getNetwork()
int getNeuronCount()
void setNetwork(BasicNetwork network)
Set the network for this layer.
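
A minimal sketch of the two constructors (the class name LayerDemo is hypothetical and the neuron counts are arbitrary):

import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.neural.networks.layers.BasicLayer;

public class LayerDemo {
    public static void main(String[] args) {
        // explicit activation function, bias node, and neuron count
        BasicLayer hidden = new BasicLayer(new ActivationSigmoid(), true, 3);
        // convenience constructor: only the neuron count is supplied
        BasicLayer simple = new BasicLayer(2);
        System.out.println(hidden.getNeuronCount());   // 3
        System.out.println(simple.getNeuronCount());   // 2
    }
}

The complete example below puts BasicNetwork and BasicLayer together: it trains a feedforward network to learn XOR with resilient propagation (RPROP).
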
import org.encog.Encog;
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLData;
import org.encog.ml.data.MLDataPair;
import org.encog.ml.data.MLDataSet;
import org.encog.ml.data.basic.BasicMLDataSet;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.neural.networks.training.propagation.resilient.ResilientPropagation;

/**
 * A neural network that learns the XOR function.
 */
public class text {

    /**
     * The input necessary for XOR.
     */
    public static double XOR_INPUT[][] = { { 0.0, 0.0 }, { 1.0, 0.0 },
            { 0.0, 1.0 }, { 1.0, 1.0 } };

    /**
     * The ideal data necessary for XOR.
     */
    public static double XOR_IDEAL[][] = { { 0.0 }, { 1.0 }, { 1.0 }, { 0.0 } };
    
    /**
     * The main method.
     * @param args No arguments are used.
     */
    public static void main(final String args[]) {
        
        // create a neural network, without using a factory
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 2));                      // input layer
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));   // hidden layer 1
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));   // hidden layer 2
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));  // output layer
        network.getStructure().finalizeStructure();
        network.reset();
        
        // create training data
        MLDataSet trainingSet = new BasicMLDataSet(XOR_INPUT, XOR_IDEAL);
        
        // train the neural network
        final ResilientPropagation train = new ResilientPropagation(network, trainingSet);

        int epoch = 1;

        do {
            train.iteration();
            System.out.println("Epoch #" + epoch + " Error:" + train.getError());
            epoch++;
        } while(train.getError() > 0.01);
        train.finishTraining();

        // test the neural network
        System.out.println("Neural Network Results:");
        for(MLDataPair pair: trainingSet ) {
            final MLData output = network.compute(pair.getInput());
            System.out.println(pair.getInput().getData(0) + "," + pair.getInput().getData(1)
                    + ", actual=" + output.getData(0) + ",ideal=" + pair.getIdeal().getData(0));
        }
        
        Encog.getInstance().shutdown();
    }
}
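
Note the design of this example: the input layer is created with a null activation function but a bias node, while the output layer has a sigmoid activation and no bias; RPROP iterations run until the error drops below 0.01, after which the trained network is evaluated on the four XOR patterns.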

 

Reposted from: https://www.cnblogs.com/codeDog123/p/6753593.html
