The difference between v$metric and v$metric_history

Oddly, the 10g documentation has no entry for these two views...

The two views have exactly the same structure, so how does their data actually differ?
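As a quick sanity check of the "same structure" claim (a sketch, assuming access to DBA_TAB_COLUMNS; V$METRIC and V$METRIC_HISTORY are public synonyms for the SYS views V_$METRIC and V_$METRIC_HISTORY), the column lists can be diffed directly:

select column_name, data_type
  from dba_tab_columns
 where owner = 'SYS' and table_name = 'V_$METRIC'
minus
select column_name, data_type
  from dba_tab_columns
 where owner = 'SYS' and table_name = 'V_$METRIC_HISTORY';

-- run it once more with the two table names swapped; if both directions
-- return no rows, the column lists are identical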


Here is how the 11gR2 documentation describes the two views:

V$METRIC displays the most recent statistic values for the complete set of metrics captured by the AWR infrastructure.

V$METRIC_HISTORY displays all the available statistic values for the complete set of metrics captured by the AWR infrastructure.

So V$METRIC shows the recent data while V$METRIC_HISTORY shows all of it. A quick query suggests that "recent" here means roughly the last minute, while "all" reaches back to instance startup:

SQL> select min(begin_time),max(end_time) from v$metric;

MIN(BEGIN_TIME) MAX(END_TIME)
------------------- -------------------
2009/10/12 23:02:30 2009/10/12 23:14:01

SQL> select min(begin_time),max(end_time) from v$metric_history;

MIN(BEGIN_TIME) MAX(END_TIME)
------------------- -------------------
2009/10/12 23:02:30 2009/10/12 23:14:10

SQL> select startup_time from v$instance;

STARTUP_TIME
-------------------
2009/10/12 23:02:14

SQL>
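Note that "the last minute" is not one fixed window: each metric group has its own collection interval and its own retention, which is presumably why min(begin_time) in V$METRIC above still reaches back to near startup for the slowly collected groups. A hedged way to check this (V$METRICGROUP exists in 10g and 11g; INTERVAL_SIZE is in hundredths of a second, and MAX_INTERVAL relates to how many intervals of history are retained per group):

select group_id, name, interval_size, max_interval
  from v$metricgroup
 order by group_id;

-- and, per group, how many rows of history V$METRIC_HISTORY actually holds:
select group_id, count(*) history_rows
  from v$metric_history
 group by group_id
 order by group_id;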

Source: http://blog.itpub.net/19602/viewspace-1027790/
