Encog3Java-User.pdf translation: Chapter 4, Constructing Neural Networks in Java

Chapter 3 covers Encog's graphical interface application. I did not download it, so that chapter is not translated.


Chapter 4



Constructing Neural Networks in Java


• Constructing a Neural Network
• Activation Functions
• Encog Persistence
• Using the Encog Analyst from Code
This chapter will show how to construct feedforward and simple recurrent neural networks with Encog and how to save these neural networks for later use. Both of these neural network types are created using the BasicNetwork and BasicLayer classes. In addition to these two classes, activation functions are also used. The role of activation functions will be discussed as well.


Neural networks can take a considerable amount of time to train. Because of this it is important to save your neural networks. Encog neural networks can be persisted using Java’s built-in serialization. This persistence can also be achieved by writing the neural network to an EG file, a cross-platform text file. This chapter will introduce both forms of persistence.
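As a preview of the persistence section later in this chapter, here is a minimal sketch of the EG-file approach, assuming the Encog 3.x EncogDirectoryPersistence API (the class name PersistenceSketch and the file name network.eg are mine):

```java
import java.io.File;

import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;
import org.encog.persist.EncogDirectoryPersistence;

public class PersistenceSketch {

    // Build a small feedforward network to save and reload.
    static BasicNetwork build() {
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(2));
        network.addLayer(new BasicLayer(3));
        network.addLayer(new BasicLayer(1));
        network.getStructure().finalizeStructure();
        network.reset();
        return network;
    }

    public static void main(String[] args) {
        File file = new File("network.eg");

        // Save the network (here freshly initialized; normally trained) to an EG file...
        EncogDirectoryPersistence.saveObject(file, build());

        // ...and load it back later, casting to the concrete type.
        BasicNetwork restored =
                (BasicNetwork) EncogDirectoryPersistence.loadObject(file);
        System.out.println("restored layer count: " + restored.getLayerCount());
    }
}
```

Java's built-in serialization (writing the BasicNetwork with an ObjectOutputStream) works as well, but the EG file has the advantage of being a readable, cross-platform text format.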


In the last chapter, the Encog Analyst was used to automatically normalize data. The Encog Analyst can also automatically create neural networks based on CSV data. This chapter will show how to use the Encog analyst to create neural networks from code.


4.1 Constructing a Neural Network


A simple neural network can quickly be created using BasicLayer and BasicNetwork objects. The following code creates several BasicLayer objects with a default hyperbolic tangent activation function. (Translator's note: in the Encog 3.4 source that I downloaded, the default activation function is ActivationSigmoid, not the hyperbolic tangent ActivationTANH.)


BasicNetwork network = new BasicNetwork();
network.addLayer(new BasicLayer(2));
network.addLayer(new BasicLayer(3));
network.addLayer(new BasicLayer(1));
network.getStructure().finalizeStructure();
network.reset();
This network will have an input layer of two neurons, a hidden layer with three neurons and an output layer with a single neuron. To use an activation function other than the hyperbolic tangent function, use code similar to the following:


BasicNetwork network = new BasicNetwork();
network.addLayer(new BasicLayer(null, true, 2));
network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
network.getStructure().finalizeStructure();
network.reset();
The sigmoid activation function is passed to the addLayer calls for the hidden and output layer. The true value that was also introduced specifies that the BasicLayer should have a bias neuron. The output layer does not have bias neurons, and the input layer does not have an activation function. This is because the bias neuron affects the next layer, and the activation function affects data coming from the previous layer.
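To make the layer settings above concrete, the following sketch (assuming the Encog 3.x API; the class name BiasSketch is mine) builds that exact network and pushes one input pattern through it:

```java
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.ml.data.MLData;
import org.encog.ml.data.basic.BasicMLData;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.networks.layers.BasicLayer;

public class BiasSketch {

    // Input layer: no activation function, bias neuron, 2 neurons.
    // Hidden layer: sigmoid activation, bias neuron, 3 neurons.
    // Output layer: sigmoid activation, no bias neuron, 1 neuron.
    static BasicNetwork build() {
        BasicNetwork network = new BasicNetwork();
        network.addLayer(new BasicLayer(null, true, 2));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), true, 3));
        network.addLayer(new BasicLayer(new ActivationSigmoid(), false, 1));
        network.getStructure().finalizeStructure();
        network.reset(); // randomize the weights
        return network;
    }

    public static void main(String[] args) {
        BasicNetwork network = build();

        // Compute the (untrained, random-weight) output for one input pattern.
        MLData input = new BasicMLData(new double[] { 0.0, 1.0 });
        MLData output = network.compute(input);
        System.out.println("output neurons: " + output.size());
    }
}
```

Because the weights are only randomized by reset(), the numeric output is meaningless until the network is trained; the sketch only demonstrates the wiring.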


Unless Encog is being used for something very experimental, always use a bias neuron. Bias neurons allow the activation function to shift off the origin of zero. This allows the neural network to produce a zero value even when the inputs are not zero. The following URL provides a more mathematical justification for the importance of bias neurons:


http://www.heatonresearch.com/wiki/Bias
Activation functions are attached to layers and used to scale data output from a layer. Encog applies a layer's activation function to the data that the layer is about to output. If an activation function is not specified for BasicLayer, the hyperbolic tangent activation will be defaulted. (Translator's note: in the Encog 3.4 source that I downloaded, the default activation function is ActivationSigmoid, not the hyperbolic tangent ActivationTANH.)


It is also possible to create context layers. A context layer can be used to create an Elman or Jordan style neural network. The following code could be used to create an Elman neural network.


BasicLayer input, hidden;
BasicNetwork network = new BasicNetwork();
network.addLayer(input = new BasicLayer(1));
network.addLayer(hidden = new BasicLayer(2));
network.addLayer(new BasicLayer(1));
input.setContextFedBy(hidden);
network.getStructure().finalizeStructure();
network.reset();
Notice the input.setContextFedBy(hidden) line? This creates a context link fed by the hidden layer: on each iteration, the hidden layer's output from the previous iteration is supplied back as additional context input. This creates an Elman style neural network. Elman and Jordan networks will be introduced in Chapter 7.
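Encog also ships pattern classes that package this wiring for you. A sketch using ElmanPattern (assuming the Encog 3.x org.encog.neural.pattern package; the class name ElmanSketch is mine) would be:

```java
import org.encog.engine.network.activation.ActivationSigmoid;
import org.encog.neural.networks.BasicNetwork;
import org.encog.neural.pattern.ElmanPattern;

public class ElmanSketch {

    // Let ElmanPattern create the input/hidden/output layers and the
    // hidden-layer context feedback, instead of wiring them by hand.
    static BasicNetwork build() {
        ElmanPattern pattern = new ElmanPattern();
        pattern.setActivationFunction(new ActivationSigmoid());
        pattern.setInputNeurons(1);
        pattern.addHiddenLayer(2);
        pattern.setOutputNeurons(1);
        return (BasicNetwork) pattern.generate();
    }

    public static void main(String[] args) {
        BasicNetwork network = build();
        System.out.println("input neurons: " + network.getInputCount());
    }
}
```

The pattern approach is less flexible than building with BasicLayer directly, but it avoids mistakes in the context wiring.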


4.2 The Role of Activation Functions


The last section illustrated how to assign activation functions to layers. Activation functions are used by many neural