Halcon Classification (Part 4): MLP (Multilayer Neural Network)

Neural nets directly determine the separating hyperplanes between the classes. For two classes, the hyperplane actually separates the feature vectors of the two classes: feature vectors that lie on one side of the plane are assigned to class 1, and feature vectors that lie on the other side are assigned to class 2. For more than two classes, in contrast, the planes are chosen such that the feature vectors of the correct class have the largest positive distance from the plane of all feature vectors.
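As a minimal sketch of this decision rule (illustrative notation, not HALCON code): with one hyperplane per class, described by a weight vector and an offset, a feature vector is assigned to the class whose plane yields the largest score:

```python
import numpy as np

def linear_classify(x, W, b):
    # One row of W and one entry of b per class hyperplane.
    # The linear scores W @ x + b play the role of the (scaled) signed
    # distances from the planes; the class with the largest score wins.
    scores = W @ x + b
    return int(np.argmax(scores))

# Illustration values only: 2 features, 3 classes.
W = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([0.0, 0.0, 1.0])
print(linear_classify(np.array([2.0, 0.5]), W, b))  # -> class 0
```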

A linear classifier can be built, e.g., using a neural net with a single layer, as shown in figure 3.4 (a, b). There, so-called processing units (neurons) first compute linear combinations of the feature vectors and the network weights and then apply a nonlinear activation function.

Classification with single-layer neural nets requires linearly separable classes, a condition that is not met in many classification applications (the XOR problem is the classic counterexample). To obtain a classifier that can also separate classes that are not linearly separable, you can add more layers, so-called hidden layers, to the net. The resulting multi-layer neural net (see figure 3.4, c) then consists of an input layer, one or several hidden layers, and an output layer. Note that one hidden layer is sufficient to approximate any separating hypersurface and any output function with values in [0, 1], as long as the hidden layer has a sufficient number of processing units.

Figure 3.4: Neural networks: single-layered for (a) two classes and (b) n classes; (c) multi-layered, with (from left to right) input layer, hidden layer, and output layer.

Within the neural net, the processing units of each layer compute a linear combination of the feature vector or of the results from the previous layer.

That is, each processing unit j first computes its activation $a_j$ as a linear combination of its input values $z_i$ (the feature vector for the first layer, the results of the previous layer otherwise) with the weights $w_{ij}$:

$$a_j = \sum_i w_{ij} z_i$$

Then the result is passed through a nonlinear activation function $g$:

$$z_j = g(a_j)$$

With HALCON, the activation function for the hidden units is the hyperbolic tangent:

$$g(a) = \tanh(a) = \frac{e^{a} - e^{-a}}{e^{a} + e^{-a}}$$

For the output function (when using the MLP for classification), the softmax activation function is used, which maps the output values into the range (0, 1) such that they add up to 1:

$$y_k = \frac{e^{a_k}}{\sum_{j} e^{a_j}}$$
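Putting the pieces together, the following is a minimal NumPy sketch of the forward pass of such a net with one tanh hidden layer and a softmax output; the layer sizes and random weights are arbitrary illustration values, not HALCON internals:

```python
import numpy as np

def softmax(a):
    e = np.exp(a - np.max(a))  # subtract the max for numerical stability
    return e / e.sum()

def mlp_forward(x, W_hidden, b_hidden, W_out, b_out):
    hidden = np.tanh(W_hidden @ x + b_hidden)  # linear combination + tanh
    return softmax(W_out @ hidden + b_out)     # linear combination + softmax

# Illustration only: 3 input features, 5 hidden units, 2 classes.
rng = np.random.default_rng(42)
W_hidden = rng.normal(size=(5, 3)); b_hidden = np.zeros(5)
W_out = rng.normal(size=(2, 5));    b_out = np.zeros(2)

y = mlp_forward(np.array([0.2, -1.0, 0.5]), W_hidden, b_hidden, W_out, b_out)
print(y, y.sum())  # the outputs lie in (0, 1) and sum to 1
```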

To derive the separating hypersurfaces for a classification with a multi-layer neural net, the network weights have to be adjusted. This is done by training: data with known output is fed to the input layer and processed by the hidden units. The output is then compared to the expected output. If the output does not match the expected output (within a certain error tolerance), the weights are incrementally adjusted so that the error is minimized. Note that the weight adjustment in HALCON is realized by a very stable numerical algorithm that leads to better results than the classical backpropagation algorithm.
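To illustrate this train-and-compare loop (plain gradient descent on the cross-entropy error, for illustration only; this is explicitly not HALCON's more stable numerical training algorithm), the following sketch trains a small tanh/softmax net on the XOR problem, the classic example of classes that are not linearly separable:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
labels = np.array([0, 1, 1, 0])   # XOR: not linearly separable
Y = np.eye(2)[labels]             # expected (one-hot) outputs

W1 = rng.normal(scale=0.5, size=(2, 4)); b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 2)); b2 = np.zeros(2)

lr = 0.5
for step in range(2000):
    # Forward pass: tanh hidden layer, softmax output.
    H = np.tanh(X @ W1 + b1)
    logits = H @ W2 + b2
    P = np.exp(logits - logits.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)
    # Compare with the expected output and adjust the weights
    # incrementally so that the cross-entropy error shrinks.
    dlogits = (P - Y) / len(X)
    dW2 = H.T @ dlogits;  db2 = dlogits.sum(axis=0)
    dH = dlogits @ W2.T
    da = dH * (1.0 - H ** 2)      # derivative of tanh
    dW1 = X.T @ da;       db1 = da.sum(axis=0)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print(P.argmax(axis=1))  # typically [0 1 1 0] after training
```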

The MLP method works for classification of general features, image segmentation, and OCR. Note that the MLP can also be used for least squares fitting (regression) and for classification problems with multiple independent logical attributes.
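In HALCON, these variants are selected via the OutputFunction parameter of create_class_mlp ('softmax' for classification, 'linear' for least squares fitting, 'logistic' for multiple independent logical attributes). As a minimal sketch of the regression variant in the same toy notation as above (not HALCON's API), the softmax output is simply replaced by a linear one:

```python
import numpy as np

def mlp_regression_forward(x, W_hidden, b_hidden, W_out, b_out):
    # Identical tanh hidden layer; the output layer stays purely linear,
    # so the net produces unconstrained real values that can be fitted
    # to targets in the least-squares sense.
    hidden = np.tanh(W_hidden @ x + b_hidden)
    return W_out @ hidden + b_out
```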
An MLP can have more than one hidden layer and is then considered a deep learning method. In HALCON, we only have a single hidden layer implemented in our MLPs. That is why, whenever we refer to deep learning methods, we exclude the MLP method.
