The simplest way to build an AI network: AutoKeras


While browsing the Keras ecosystem I stumbled upon AutoKeras, a high-level wrapper around Keras. This post follows its classifier tutorial:
https://autokeras.com/tutorial/overview/
It covers classification and regression for images, text, and structured data;

it also supports custom blocks and more — a complete toolkit.

1. Image classification in 4 lines of code

"""
autokeras ,keras ecology
Target: construct a fast & simple nn
"""

import autokeras as ak
from tensorflow.keras.datasets import mnist
import numpy as np


(x_train, y_train), (x_test, y_test) = mnist.load_data(path="./mnist.npz")
print(x_train.shape, y_train.shape)

# # 2 Optional: adapt the input structure, e.g. add a channel dimension
# x_train = x_train.reshape(x_train.shape + (1,))
# x_test = x_test.reshape(x_test.shape + (1,))
#
# # Integer labels -> one-hot, fast way: index into np.eye(10)
# # (each scalar label becomes a 0/1 row)
# eye = np.eye(10)
# y_train = eye[y_train]
# y_test = eye[y_test]

# 1 The simple version
auk = ak.ImageClassifier(max_trials=3)  # builds the architecture search; max_trials caps how many models are tried (kept small for speed)
auk.fit(x_train, y_train, epochs=10)    # validation_split can be passed to set the validation ratio

rst = auk.predict(x_test)
print(rst)

print(auk.evaluate(x_test, y_test))

2. Notes, plus channel and one-hot variants

  • Aside from the print calls, just 4 lines cover the entire pipeline — data loading, model construction, training, and prediction. Astonishing.
  • By setting max_trials, AutoKeras automatically trains a selection of fairly state-of-the-art architectures (vanilla CNN, ResNet, and the like) — one candidate model per trial. Strong results with almost no expertise required.
  • There is also plenty of room for customization (the ak.ImageClassifier and fit calls accept many more arguments). Uncommenting the block above adapts the input to carry a channel dimension and converts the integer class labels to one-hot vectors: a matrix-indexing trick in which each label value maps to a 0/1 row; with 10 classes here, a 10×10 np.eye matrix serves as the lookup table.
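The two transformations described in the last bullet can be sketched with plain NumPy (the label values below are made up for illustration):

```python
import numpy as np

# Fake grayscale batch standing in for MNIST: 3 images of 28x28.
x = np.zeros((3, 28, 28))
y = np.array([3, 0, 7])          # integer class labels

# Add a trailing channel dimension: (3, 28, 28) -> (3, 28, 28, 1)
x = x.reshape(x.shape + (1,))

# Integer label -> one-hot row via np.eye indexing:
# row i of eye is all zeros except a 1 at column i.
eye = np.eye(10)
y_one_hot = eye[y]

print(x.shape)        # (3, 28, 28, 1)
print(y_one_hot[0])   # [0. 0. 0. 1. 0. 0. 0. 0. 0. 0.]
```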

3. Saving the model with export_model

# 3 Save the model:
model = auk.export_model()
print(type(model))
try:
    model.save("model_auk_mnist", save_format="tf")
except Exception:
    model.save("model_auk_mnist.h5")

Result: 98.9% test accuracy.
Epoch 1/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.1584 - accuracy: 0.9521
Epoch 2/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0731 - accuracy: 0.9775
Epoch 3/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0590 - accuracy: 0.9818
Epoch 4/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0516 - accuracy: 0.9834
Epoch 5/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0445 - accuracy: 0.9857
Epoch 6/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0422 - accuracy: 0.9862
Epoch 7/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0375 - accuracy: 0.9881
Epoch 8/10
1875/1875 [==============================] - 8s 4ms/step - loss: 0.0343 - accuracy: 0.9890
Epoch 9/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0311 - accuracy: 0.9899
Epoch 10/10
1875/1875 [==============================] - 7s 4ms/step - loss: 0.0315 - accuracy: 0.9898
[[0. 0. 0. … 1. 0. 0.]
 [0. 0. 1. … 0. 0. 0.]
 [0. 1. 0. … 0. 0. 0.]
 …
 [0. 0. 0. … 0. 0. 0.]
 [0. 0. 0. … 0. 0. 0.]
 [0. 0. 0. … 0. 0. 0.]]
313/313 [==============================] - 1s 2ms/step - loss: 0.0384 - accuracy: 0.9889
[0.038361262530088425, 0.9889000058174133]

4. Incremental training with load_model instead of ak.ImageClassifier

import autokeras as ak
from tensorflow.keras.datasets import mnist
import numpy as np


(x_train, y_train), (x_test, y_test) = mnist.load_data(path="./mnist.npz")
print(x_train.shape, y_train.shape)

# 2 Adapt the input structure: add a trailing channel dimension
x_train = x_train.reshape(x_train.shape + (1,))
x_test = x_test.reshape(x_test.shape + (1,))

# Integer labels -> one-hot, fast way: index into np.eye(10)
# (each scalar label becomes a 0/1 row)
eye = np.eye(10)
y_train = eye[y_train]
y_test = eye[y_test]

# 1 The simple version is replaced here:
# auk = ak.ImageClassifier(max_trials=1)
# 4 Load the saved model instead; CUSTOM_OBJECTS registers AutoKeras' custom layers
from tensorflow.keras.models import load_model
auk = load_model("model_auk_mnist.h5", custom_objects=ak.CUSTOM_OBJECTS)

auk.fit(x_train, y_train, epochs=10)  # validation_split can set the validation ratio

rst = auk.predict(x_test)
print("some results", rst[0:5])

print(auk.evaluate(x_test, y_test))

Results:

Epoch 1/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0306 - accuracy: 0.9901
Epoch 2/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0294 - accuracy: 0.9905
Epoch 3/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0265 - accuracy: 0.9913
Epoch 4/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0257 - accuracy: 0.9916
Epoch 5/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0252 - accuracy: 0.9912
Epoch 6/10
1875/1875 [==============================] - 7s 3ms/step - loss: 0.0224 - accuracy: 0.9923
Epoch 7/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0218 - accuracy: 0.9925
Epoch 8/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0220 - accuracy: 0.9927
Epoch 9/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0201 - accuracy: 0.9932
Epoch 10/10
1875/1875 [==============================] - 6s 3ms/step - loss: 0.0213 - accuracy: 0.9931
some results [[7.6293778e-13 4.7462640e-15 6.1769159e-09 1.7457960e-08 1.7195214e-14
9.4353697e-14 6.8693101e-21 1.0000000e+00 8.8252903e-12 8.1154390e-09]
[1.1431418e-09 1.1018636e-09 9.9999976e-01 2.5623555e-12 3.4857134e-14
1.9685324e-14 1.9073372e-07 1.5593536e-14 8.8467931e-12 4.1756676e-17]
[8.6852507e-09 9.9994111e-01 7.3585187e-07 8.6860457e-09 4.0101510e-05
5.1157974e-07 1.7664402e-07 5.9231743e-06 1.1472413e-05 1.3919924e-08]
[9.9999225e-01 9.2479785e-19 1.4545560e-07 4.0172986e-11 3.3431049e-14
3.3715644e-09 7.6826454e-06 8.6051782e-12 1.7302542e-09 2.4808584e-09]
[3.1395352e-11 9.3180358e-15 1.4311107e-12 1.4205627e-14 9.9999928e-01
3.0552855e-13 1.0188720e-13 2.6126143e-11 2.6243022e-10 7.7013647e-07]]
313/313 [==============================] - 1s 2ms/step - loss: 0.0373 - accuracy: 0.9900
[0.03733522817492485, 0.9900000095367432]

  • Note that training resumes from ~99% accuracy in the very first epoch, rather than the ~95% of the initial run — the loaded weights are the starting point.
  • The final result improves only marginally, from 98.9% to 99.0%.
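Since predict here returns one row of class probabilities per sample, converting back to digit labels is a one-line argmax. A small illustration with made-up probabilities (not the model's actual output):

```python
import numpy as np

# Two fake probability rows, shaped like the predict() output above.
probs = np.array([
    [0.01, 0.02, 0.90, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01, 0.01],
    [0.70, 0.05, 0.05, 0.05, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02],
])
labels = probs.argmax(axis=1)  # index of the largest probability per row
print(labels)  # [2 0]
```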