Functional vs. Sequential Keras API

This post explores the two ways of using the Keras API — the Functional API and the Sequential API — comparing their characteristics to help readers choose the right one when building deep learning models.
TensorFlow 2.0 provides a comprehensive ecosystem of tools for developers, enterprises, and researchers who want to push the state of the art in machine learning and build scalable ML-powered applications. It is the single most sought-after GitHub repository, and the need for effective ML is exploding.

Keras is the most-used deep learning library among the top five winning entries in Kaggle competitions. The intuitive way it lets you build complex neural network architectures with so little effort has attracted many ML enthusiasts.

The Keras core library is built around layers. A layer is an abstraction for doing computations, but essentially all it does is take a number of tensors and output a number of tensors on the other side. If you're not familiar with the idea of tensors, I recommend reading up on it on the TensorFlow homepage, YouTube, Medium, or via online courses on Coursera or Udemy.

There are numerous built-in layers in Keras, including convolutional, recurrent, dropout, and many more. Again, all of these can be found in the Keras documentation. However, there's also an alternative: building custom layers. Let me show you how this works.

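A minimal sketch of such a custom layer (the class name `MyLinear` and the bias-free design are assumptions, not the article's exact code):

```python
import tensorflow as tf

class MyLinear(tf.keras.layers.Layer):
    """A bias-free linear layer defined by subclassing keras.layers.Layer."""

    def __init__(self, units, input_dim):
        super().__init__()
        # W: weight matrix of shape (input_dim, units), created in the constructor.
        w_init = tf.random_normal_initializer()
        self.w = tf.Variable(
            initial_value=w_init(shape=(input_dim, units)), trainable=True
        )

    def call(self, inputs):
        # The forward computation: a single matrix multiplication.
        return tf.matmul(inputs, self.w)
```

Calling `MyLinear(units=3, input_dim=5)` on a batch of shape `(2, 5)` yields a tensor of shape `(2, 3)`.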
This is also referred to as layer subclassing. Notice that we passed the Keras Layer class into our newly defined class. Each Keras layer requires two functions: the initializer and the call method. The first sets up the constructor, and the second runs the actual layer computation. W holds the weight matrix with dimensions input_dim × units.

You can delay the weight creation by using the build method to define the weights. The build method is executed when the __call__ method is invoked, meaning the weights are only created once the layer is called with a specific input.

The build method has a required argument input_shape, which can be used to define the shapes of the layer weights.

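A sketch of the same bias-free layer with lazy weight creation via build (names and defaults are illustrative):

```python
import tensorflow as tf

class MyLinear(tf.keras.layers.Layer):
    """The same linear layer, but with lazy weight creation in build()."""

    def __init__(self, units):
        super().__init__()
        self.units = units

    def build(self, input_shape):
        # Called automatically on the first __call__; input_shape is known here,
        # so input_dim no longer needs to be passed to the constructor.
        self.w = self.add_weight(
            shape=(input_shape[-1], self.units),
            initializer="random_normal",
            trainable=True,
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.w)
```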
A list of layers is then fed to the model. Before using the Sequential or Functional API, I am going to show you another, even lower-level solution.

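A minimal sketch of such a subclassed model; the final linear layer here is a plain `Dense` without activation, standing in for a custom linear layer (layer sizes are assumptions):

```python
import tensorflow as tf

class SmallModel(tf.keras.Model):
    """A small two-layer model defined by subclassing keras.Model."""

    def __init__(self):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(16, activation="relu")
        # The linear layer serves as the model's final output.
        self.linear = tf.keras.layers.Dense(1)

    def call(self, inputs):
        x = self.hidden(inputs)
        return self.linear(x)
```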
As you may have guessed, this is called model subclassing. We passed our linear layer as the final output of a small two-layer model.

Sequential API

The Sequential model is the most abstract model-building architecture you can think of. It takes a list of layers and a name as arguments. It is built on the assumption that the next layer's inputs are the previous layer's outputs. Keep that idea in mind as we move to the Functional API. Without further ado, let's take a look at a model using the well-known Sequential API.

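A sketch of such a Sequential model (the `(28, 28, 1)` input shape and the model name are assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential(
    [
        tf.keras.Input(shape=(28, 28, 1)),            # defines the input shape
        layers.Conv2D(16, (5, 5), activation="relu"),  # 16 filters, 5x5 kernel
        layers.AveragePooling2D(),                     # default 2x2 pooling
        layers.Flatten(),
        layers.Dense(20),                              # single dense layer, 20 units
    ],
    name="sequential_model",
)
```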
Here we are using a convolutional layer with 16 filters and a 5-by-5 kernel, with relu activation, and we are also defining the input_shape, followed by an average pooling layer, a flatten layer, and a single dense layer with 20 units. If all that confuses you, please check for more content via YouTube or Coursera.

Functional API

The Keras functional API is a way to create models that are more flexible than the tf.keras.Sequential API. The functional API can handle models with non-linear topology, shared layers, and even multiple inputs or outputs.

The main idea is that a deep learning model is usually a directed acyclic graph (DAG) of layers. So the functional API is a way to build graphs of layers.

Here is the same model rewritten with the Functional API.

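A functional sketch of the same architecture (the `(28, 28, 1)` input shape is an assumption):

```python
import tensorflow as tf
from tensorflow.keras import layers

# The Input layer takes in the input shape.
inputs = tf.keras.Input(shape=(28, 28, 1))

# Each layer is called on the tensors of the previous layer.
x = layers.Conv2D(16, (5, 5), activation="relu")(inputs)
x = layers.AveragePooling2D()(x)
x = layers.Flatten()(x)
outputs = layers.Dense(20)(x)

# The model is defined by its inputs and outputs.
model = tf.keras.Model(inputs=inputs, outputs=outputs)
```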
The major difference is the Input layer, which takes in the input shape. The extra brackets after each layer hold its input tensors. Notice that the Input layer doesn't have any, and there is no layer that consumes the tensors of the last layer. Makes sense, right?

Then, we simply define the model with its inputs and outputs. Model compiling, training, evaluating, and predicting remain the same as with the Sequential API.

The main advantage of the Functional API is that it can handle multiple inputs, outputs and concatenation. It offers flexibility.

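A sketch of a two-input, two-output model (all shapes, names, and head types are illustrative assumptions):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Two input layers.
input_a = tf.keras.Input(shape=(32,), name="input_a")
input_b = tf.keras.Input(shape=(16,), name="input_b")

# Concatenate both branches into a shared representation.
x = layers.concatenate([input_a, input_b])
x = layers.Dense(24, activation="relu")(x)

# Two outputs, e.g. a regression head and a classification head.
output_1 = layers.Dense(1, name="output_1")(x)
output_2 = layers.Dense(3, activation="softmax", name="output_2")(x)

# Pass lists of tensors as the model's inputs and outputs.
model = tf.keras.Model(inputs=[input_a, input_b],
                       outputs=[output_1, output_2])
```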
Our new model has two input layers, as well as two outputs. Defining it is as simple as passing a list of tensors to the model's inputs and outputs arguments.

Such a model can be compiled with a single unified loss or with a separate loss per output. In this case, we are stating two different losses along with the loss_weights argument. When computing the loss of the model's prediction, the total loss will be measured as:

total_loss = loss_weights[0] * loss_1 + loss_weights[1] * loss_2

Same goes for passing the arguments for validation and training data. Quite neat.

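A sketch of compiling and fitting such a multi-output model with one loss per output and the loss_weights argument (the model, data, and the 1.0/0.5 weights are assumptions):

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers

# A small two-output model: a regression head and a classification head.
inputs = tf.keras.Input(shape=(8,))
x = layers.Dense(16, activation="relu")(inputs)
out_reg = layers.Dense(1, name="reg")(x)
out_cls = layers.Dense(3, activation="softmax", name="cls")(x)
model = tf.keras.Model(inputs, [out_reg, out_cls])

# One loss per output; loss_weights scales each term in the total loss.
model.compile(
    optimizer="adam",
    loss={"reg": "mse", "cls": "sparse_categorical_crossentropy"},
    loss_weights={"reg": 1.0, "cls": 0.5},
)

# Training and validation data are passed the same way: one array per output.
x_train = np.random.rand(32, 8)
y_reg = np.random.rand(32, 1)
y_cls = np.random.randint(0, 3, size=(32,))
history = model.fit(
    x_train, {"reg": y_reg, "cls": y_cls},
    epochs=1, verbose=0, validation_split=0.25,
)
```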
For example, you simply cannot define a residual network with the Sequential API. The residual neural network architecture is based on utilizing skip connections, or shortcuts, to jump over some layers. Typical ResNet models are implemented with double- or triple-layer skips that contain nonlinearities (ReLU) and batch normalization in between.

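A minimal sketch of a single skip connection with the Functional API; batch normalization is omitted for brevity, and all shapes are assumptions:

```python
import tensorflow as tf
from tensorflow.keras import layers

inputs = tf.keras.Input(shape=(32, 32, 16))

# Two convolutional layers; padding="same" keeps the shape so the
# skip addition below is valid.
x = layers.Conv2D(16, (3, 3), activation="relu", padding="same")(inputs)
x = layers.Conv2D(16, (3, 3), padding="same")(x)

# The skip connection: add the block's input to its output, then apply ReLU.
x = layers.add([x, inputs])
outputs = layers.Activation("relu")(x)

model = tf.keras.Model(inputs, outputs)
```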
Using keras.utils.plot_model and passing in the model, we get a pretty nice looking schema.

In conclusion, Keras truly is a machine learning powerhouse with endless possibilities. If all you require is a linear flow model, use the Sequential API; that's what it was built for. For complex architectures, such as a residual neural network or an autoencoder, have fun with the Functional API. It's up to you to create the next big thing. Whether you use the Functional or Sequential API or choose to define your own model, Keras makes it super easy.

Disclaimer: content and some images are derived from the Customising Your Models with TensorFlow course by Imperial College London.

Translated from: https://medium.com/analytics-vidhya/functional-vs-sequential-keras-api-6d52ef3b86c1
