Sigmoid function optimization
Abstract
Neural networks are an emerging paradigm that is being applied in more and more fields, and many applications on edge and mobile devices now rely on them for computation. However, because computational resources on edge devices are limited, a neural network usually needs to be optimized before it can be deployed there. In this article, I describe how to optimize a neural network by optimizing the Sigmoid activation function.
1. Introduction
The Sigmoid function, also called the logistic function, is commonly used as the activation of hidden-layer neurons. It takes values in the range (0, 1), mapping any real number into the interval (0, 1), and can therefore be used for binary classification. Sigmoid is smooth and easy to differentiate.
The Sigmoid function is defined by the following equation:

$$sigmoid(x) = \frac{1}{1+e^{-x}}$$
Its derivative with respect to x can be expressed in terms of the function itself:

$$S^{\prime}(x)=\frac{e^{-x}}{\left(1+e^{-x}\right)^{2}}=S(x)(1-S(x))$$
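As a sanity check, here is a minimal Python sketch (the function names are mine, not from the original text) comparing the closed-form derivative with a central finite-difference approximation:

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sigmoid_derivative(x):
    # closed form: S'(x) = S(x) * (1 - S(x))
    s = sigmoid(x)
    return s * (1.0 - s)

h = 1e-6
for x in (-2.0, 0.0, 1.5):
    numeric = (sigmoid(x + h) - sigmoid(x - h)) / (2 * h)
    print(x, sigmoid_derivative(x), numeric)  # the two values should match closely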
The Sigmoid function contains an exp and a division, both of which are relatively expensive operations. This is very unfriendly to edge devices with limited resources, so it is worth optimizing Sigmoid with respect to the exp and the division.
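To get a rough feel for that cost, here is a small timing sketch using Python's timeit (a minimal illustration; absolute numbers depend entirely on the hardware and interpreter):

import math
import timeit

x = 1.234

# Evaluating Sigmoid costs one exp() plus one division per call.
t_sigmoid = timeit.timeit(lambda: 1.0 / (1.0 + math.exp(-x)), number=1_000_000)

# A plain floating-point comparison, for reference.
t_compare = timeit.timeit(lambda: x > 0.5, number=1_000_000)

print("sigmoid evaluation:", t_sigmoid)
print("plain comparison:  ", t_compare)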
2. Optimization Analysis
Let y = sigmoid(x) and solve for x:

$$sigmoid(x) = \frac{1}{1+e^{-x}}$$

$$\Downarrow$$

$$\frac{1}{y} = 1+e^{-x}$$

$$\Downarrow$$

$$\frac{1}{y}-1 = e^{-x}$$

$$\Downarrow$$

$$\ln\left(\frac{1}{y}-1\right)=-x$$

$$\Downarrow$$

$$-\ln\left(\frac{1}{y}-1\right)=x$$
$$\Downarrow$$

$$f^{-1}(y) = -\ln\left(\frac{1}{y}-1\right)$$
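As a quick check (a worked example I have added, not part of the original derivation): for $y = 0.5$,

$$f^{-1}(0.5) = -\ln\left(\frac{1}{0.5}-1\right) = -\ln 1 = 0,$$

which is consistent with $sigmoid(0) = 0.5$.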
Because Sigmoid is strictly increasing, comparing Sigmoid(x) against a threshold is equivalent to comparing x directly against desigmoid(threshold), which only needs to be computed once. The comparison in the inference code can therefore be rewritten as follows:

# The original code
if Sigmoid(P[4]) > threshold then
    do something

# The new code (desigmoid_threshold = desigmoid(threshold), precomputed)
if P[4] > desigmoid_threshold then
    do something
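The same idea as a minimal runnable Python sketch (the list P, the index 4, and the 0.7 threshold are placeholder values of my own, not from the original text):

import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def desigmoid(y):
    # inverse of sigmoid: -ln(1/y - 1)
    return -math.log(1.0 / y - 1.0)

threshold = 0.7
desigmoid_threshold = desigmoid(threshold)  # computed once, outside the hot path

P = [0.3, -1.2, 2.5, 0.1, 1.1]  # placeholder network outputs

# The two checks agree, but the second one needs no exp() and no division.
assert (sigmoid(P[4]) > threshold) == (P[4] > desigmoid_threshold)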
3. Algorithm
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def desigmoid(y):  # inverse of sigmoid: -ln(1/y - 1)
    return -math.log(1 / y - 1)
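A quick usage check for the two functions above (run after the definitions; the expected values in the comments are my own):

print(sigmoid(0.0))             # 0.5
print(desigmoid(0.5))           # ≈ 0.0
print(desigmoid(sigmoid(2.0)))  # ≈ 2.0 (round trip)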
4. Future work
Although the computational efficiency is greatly improved by the desigmoid optimization, exponential overflow can easily occur when the data values become large, and we need an effective way to handle it. I am currently using a log function to smooth the data, which reduces the probability of overflow to some extent, but I need to find a more efficient method to solve the exponential overflow problem in the future.
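For reference, one common way to avoid overflow in exp(-x) is to branch on the sign of x so that exp is only ever called with a non-positive argument. This is the standard numerically stable formulation, shown here as an illustrative sketch rather than the log-smoothing approach described above:

import math

def stable_sigmoid(x):
    # exp() is only evaluated on a non-positive argument, so it cannot overflow
    if x >= 0:
        return 1.0 / (1.0 + math.exp(-x))
    z = math.exp(x)
    return z / (1.0 + z)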