Keras: Defining Custom Activation Functions and Layers

Custom activation functions

Define a function that operates on a Tensor, then register it with Keras:

from keras import backend as K
from keras.layers import Activation
from keras.utils.generic_utils import get_custom_objects
import tensorflow as tf

def binary(x):
    # Note: tf.greater has no gradient defined, so training will raise:
    # ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient
    # defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
    return K.cast(tf.greater(x, 0), tf.uint8)

# Register the custom activation so Keras can resolve it by name
get_custom_objects().update({'binary': Activation(binary)})
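
Once registered, the activation can be used like a built-in one. A minimal usage sketch (layer sizes here are arbitrary placeholders); note that fitting this model would trigger the gradient error described in the comment above:

from keras.models import Sequential
from keras.layers import Dense, Activation

model = Sequential()
model.add(Dense(32, input_shape=(16,)))
# Pass the function directly, or refer to it by the registered name 'binary'
model.add(Activation(binary))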

Custom layers

Apart from the input layer, a Keras layer cannot use ops that have no gradient defined; otherwise an error is raised at training time.

The following layer can therefore only be used as an input layer:

import tensorflow as tf
from keras import backend as K
from keras.layers import Layer


class MaskLayer(Layer):
    """
    Output shape matches the input shape.
    Output values are 0/1.
    """
    def __init__(self, **kwargs):
        self.weight_m = None
        self.weight_b = None
        super(MaskLayer, self).__init__(**kwargs)

    def build(self, input_shape):
        self.weight_m = self.add_weight(name='m_weight',
                                        shape=(input_shape[-1], input_shape[-1]),
                                        initializer='glorot_uniform',
                                        trainable=True)
        self.weight_b = self.add_weight(name='m_bias',
                                        shape=(input_shape[-1], ),
                                        initializer='glorot_uniform',
                                        trainable=True)
        super(MaskLayer, self).build(input_shape)

    def call(self, x, **kwargs):
        m = K.dot(x, self.weight_m) + self.weight_b
        m = tf.layers.batch_normalization(m)  # TF 1.x API

        # Note: K.greater_equal has no gradient defined, so training will raise:
        # ValueError: An operation has `None` for gradient. Please make sure that all of your ops have a gradient
        # defined (i.e. are differentiable). Common ops without gradient: K.argmax, K.round, K.eval.
        return K.cast(K.greater_equal(m, 0), tf.uint8)

    def compute_output_shape(self, input_shape):
        return input_shape

A custom Layer can be registered with Keras the same way as the Activation above, via get_custom_objects.
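
For example, a minimal sketch (the 'mask_model.h5' path is a placeholder); registering the class lets load_model resolve it by name:

from keras.utils.generic_utils import get_custom_objects
from keras.models import load_model

get_custom_objects().update({'MaskLayer': MaskLayer})
model = load_model('mask_model.h5')  # 'MaskLayer' now resolves by name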

Custom initializers

from keras import backend as K
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

def my_init(shape, dtype=None):
    value = np.random.random(shape)
    return K.variable(value, dtype=dtype)

model = Sequential()
model.add(Dense(64, kernel_initializer=my_init, input_shape=(32,)))

Note that this mainly initializes float-typed weights.
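
A quick way to see this (a sketch using the Keras backend): weights created this way default to the backend float type, K.floatx():

from keras import backend as K
import numpy as np

print(K.floatx())                        # 'float32' by default
v = K.variable(np.random.random((2, 2)))
print(K.dtype(v))                        # 'float32'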

Getting the output shape of the previous layer

A quick look at the relevant tensor attributes from the Python REPL:

>>> from tensorflow.keras.layers import Input, Dense
>>> inp = Input(shape=(3,))
>>> inp
<tf.Tensor 'input_1:0' shape=(?, 3) dtype=float32>
>>> inp.shape
TensorShape([Dimension(None), Dimension(3)])
>>> m = Dense(10)(inp)
>>> m.shape
TensorShape([Dimension(None), Dimension(10)])

When defining a layer, the previous layer's output shape can be used to size the current layer, for example:

from keras.models import Model
from keras.layers import Input, Dense, Conv2D, Flatten, MaxPool2D

inp = Input(shape=(20, 20, 1, ))
m = Conv2D(16, kernel_size=(3, inp.shape[2].value))(inp)
m = MaxPool2D(pool_size=(m.shape[1].value, 1))(m)
m = Flatten()(m)
outp = Dense(m.shape[-1].value)(m)
model = Model(inputs=inp, outputs=outp)
model.summary()

The output is:

_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
input_4 (InputLayer)         (None, 20, 20, 1)         0         
_________________________________________________________________
conv2d_6 (Conv2D)            (None, 18, 1, 16)         976       
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 1, 1, 16)          0         
_________________________________________________________________
flatten (Flatten)            (None, 16)                0         
_________________________________________________________________
dense_1 (Dense)              (None, 16)                272       
=================================================================
Total params: 1,248
Trainable params: 1,248
Non-trainable params: 0
_________________________________________________________________
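
As an alternative to reading `.value` from TensorFlow `Dimension` objects, `K.int_shape` returns the static shape as a tuple of plain Python ints (with None for the batch dimension). A minimal sketch of the same idea:

from keras import backend as K
from keras.layers import Input, Dense

inp = Input(shape=(3,))
m = Dense(10)(inp)
print(K.int_shape(m))        # (None, 10)
outp = Dense(K.int_shape(m)[-1])(m)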