Interval-Merging Dynamic Programming Training

啦啦 here, back with more solution write-ups~
Today we practiced interval-merging dynamic programming (that sentence has the unmistakable air of a primary-school essay~)
So: three problems in three hours~
Not bad, right?
1. Number Game (dgame)
[Problem Description] Dingding has recently been hooked on a number game. The game looks simple, but after studying it for many days, Dingding found that winning under its simple rules is not so easy after all. The game goes like this: in front of you is a circle of integers (n of them in total). You must divide it, in order, into m parts; the numbers inside each part are summed, the m resulting sums are each taken modulo 10, and those m residues are multiplied together, giving a number k. The goal of the game is to make k as large, or as small, as possible.
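A quick worked example (my own, not from the original statement, just to pin down the rules): take n = 4, m = 2 and the circle 4 3 -1 2. Cutting it into (2, 4, 3) and (-1) gives (2+4+3) mod 10 = 9 and (-1) mod 10 = 9 (residues are kept in [0,10)), so k = 9*9 = 81, which turns out to be the maximum; cutting into (4, 3) and (-1, 2) gives 7*1 = 7, the minimum.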
A perfectly ordinary interval-merging DP.
First precompute tot[i,j], the sum of the numbers from i through j taken mod 10 (normalised into [0,10), since the numbers can be negative).
Let f[i,j,p] be the maximum product obtainable from the j numbers starting at position i (inclusive), split with p cuts, i.e. into p+1 blocks (the minimum version g is symmetric). The base case f[i,j,0] is just tot[i,i+j-1], and the circle is handled by doubling the array and taking the best answer over all n starting positions. The transition:
f[i,j,p]:=max(f[i,j,p],f[i,k,p-1]*tot[i+k,i+j-1]);
And with that you can AC~

Code


const shuru='dgame.in';
      shuchu='dgame.out';
var   a:array[0..100] of longint;
      f,g:array[0..100,0..100,0..10] of longint;   { f: max DP, g: min DP }
      tot:array[0..100,0..100] of longint;         { block value (sum mod 10) of a[i..j] }
      ans,n,m,i,j,k,p:longint;
function max(a,b:longint):longint;
begin
	if a>b then exit(a);
	exit(b);
end;
function min(a,b:longint):longint;
begin
	if a<b then exit(a);
	exit(b);
end;
procedure init;
begin
	assign(input,shuru);
	assign(output,shuchu);
	reset(input);
	rewrite(output);
	readln(n,m);
	for i:=1 to n do
		readln(a[i]);
	{ break the circle: duplicate the array so every rotation is an ordinary interval }
	for i:=n+1 to (n shl 1) do
		a[i]:=a[i-n];
	{ tot[i,j] = (a[i]+...+a[j]) mod 10, normalised into [0,10) for negative sums }
	for i:=1 to (n shl 1) do
		for j:=i to (n shl 1) do
			begin
				for k:=i to j do
					tot[i,j]:=tot[i,j]+a[k];
				tot[i,j]:=tot[i,j] mod 10;
				while tot[i,j]<0 do
					tot[i,j]:=tot[i,j]+10;
			end;
	dec(m);   { p counts cuts from now on: m parts = m-1 cuts }
	close(input);
	for i:=1 to 100 do
		for j:=1 to 100 do
			for k:=1 to 10 do
				g[i,j,k]:=maxlongint;
	{ note: the post is cut off below; the rest is reconstructed from the recurrence stated above }
	{ base case: zero cuts, the whole interval forms a single block }
	for i:=1 to n do
		for j:=1 to n do
			begin
				f[i,j,0]:=tot[i,i+j-1];
				g[i,j,0]:=tot[i,i+j-1];
			end;
end;
procedure work;
begin
	{ f[i,j,p]: best product for the j numbers starting at i with p cuts; }
	{ the first k numbers get p-1 cuts, the last block is a[i+k..i+j-1]   }
	for p:=1 to m do
		for i:=1 to n do
			for j:=p+1 to n do
				for k:=p to j-1 do
					begin
						f[i,j,p]:=max(f[i,j,p],f[i,k,p-1]*tot[i+k,i+j-1]);
						g[i,j,p]:=min(g[i,j,p],g[i,k,p-1]*tot[i+k,i+j-1]);
					end;
	{ the circle: any of the n positions may start the first block }
	ans:=maxlongint;
	for i:=1 to n do
		ans:=min(ans,g[i,n,m]);
	writeln(ans);   { minimum k }
	ans:=0;
	for i:=1 to n do
		ans:=max(ans,f[i,n,m]);
	writeln(ans);   { maximum k }
	close(output);
end;
begin
	init;
	work;
end.
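
A small aside: the triple loop that fills tot is cubic. For the array sizes used here (at most 100 doubled elements) that is harmless, but prefix sums would bring it down to O(n^2). A minimal sketch of the idea, assuming a hypothetical helper array s:array[0..100] of longint added to the globals:

{ hypothetical O(n^2) replacement for the tot triple loop, via prefix sums s[] }
s[0]:=0;
for i:=1 to (n shl 1) do
	s[i]:=s[i-1]+a[i];   { s[i] = a[1]+...+a[i] }
for i:=1 to (n shl 1) do
	for j:=i to (n shl 1) do
		tot[i,j]:=(((s[j]-s[i-1]) mod 10)+10) mod 10;   { sum of a[i..j], normalised into [0,10) }

With the worked example above as input (4 2 on the first line, then 4, 3, -1 and 2 one per line), the reconstructed program should print 7 and then 81.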