DB_CACHE_SIZE Parameter Definition

Parameter type: Big integer

Syntax: DB_CACHE_SIZE = integer [K | M | G]

Default value: If SGA_TARGET is set and DB_CACHE_SIZE is not specified, the default is 0 (the size is determined internally by Oracle Database); if it is specified, the user-specified value sets a minimum size for the memory pool. If SGA_TARGET is not set, the default is the greater of 48 MB or 4 MB * number of CPUs * granule size.

Modifiable: ALTER SYSTEM

Basic: No
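
Because the parameter is modifiable with ALTER SYSTEM, it can be resized on a running instance. A minimal sketch follows; the 512M target is only an illustrative value, and SCOPE = BOTH assumes the instance was started with an spfile:

```sql
-- Sketch only: resize the DEFAULT buffer cache to an assumed 512 MB target.
-- SCOPE = BOTH applies the change to the running instance and records it in the spfile.
ALTER SYSTEM SET DB_CACHE_SIZE = 512M SCOPE = BOTH;
```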



 

DB_CACHE_SIZE specifies the size of the DEFAULT buffer pool for buffers with the primary block size (the block size defined by the DB_BLOCK_SIZE initialization parameter).
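
One way to see the block size the DEFAULT pool serves and the pool's current size is to query the instance. The sketch below assumes SQL*Plus and the V$SGA_DYNAMIC_COMPONENTS view; the component name may differ slightly between versions:

```sql
-- Primary block size used by buffers in the DEFAULT buffer cache
SHOW PARAMETER db_block_size

-- Current size of the DEFAULT buffer cache SGA component
SELECT component, current_size
FROM   v$sga_dynamic_components
WHERE  component = 'DEFAULT buffer cache';
```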

The value must be at least 4M * number of CPUs * granule size (smaller values are automatically rounded up to this value).
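
The CPU count and granule size that feed into this minimum can be checked before choosing a value. A rough sketch, assuming V$SGAINFO reports the granule size under the row name 'Granule Size':

```sql
-- Number of CPUs visible to the instance (CPU_COUNT initialization parameter)
SHOW PARAMETER cpu_count

-- SGA granule size in bytes (row name assumed to be 'Granule Size')
SELECT bytes FROM v$sgainfo WHERE name = 'Granule Size';
```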

