Zhejiang University PAT Advanced Level 1096. Consecutive Factors (20)

Problem link: click to open the link

Among all the factors of a positive integer N, there may exist several consecutive numbers. For example, 630 can be factored as 3*5*6*7, where 5, 6, and 7 are the three consecutive numbers. Now given any positive N, you are supposed to find the maximum number of consecutive factors, and list the smallest sequence of the consecutive factors.

Input Specification:

Each input file contains one test case, which gives the integer N (1 < N < 2^31).

Output Specification:

For each test case, print in the first line the maximum number of consecutive factors. Then in the second line, print the smallest sequence of the consecutive factors in the format "factor[1]*factor[2]*...*factor[k]", where the factors are listed in increasing order, and 1 is NOT included.

Sample Input:
630
Sample Output:
3
5*6*7
My C++ code:

#include<iostream>
#include<vector>
using namespace std;
int main()
{
	long long n;
	cin >> n;
	vector<long long> finally_factors(1, n), temp_factors;  // best run so far; initially just the single factor n
	long long component_n;  // the part of n not yet divided out
	for (long long i = 2; i*i <= n; i++)  // a run of length >= 2 must start at some i <= sqrt(n)
	{
		temp_factors.clear();  // discard the previous candidate run
		component_n = n;
		for (long long j = i; component_n % j == 0; j++)  // extend the run while j, j+1, ... keep dividing
		{
			temp_factors.push_back(j);  // j is the next consecutive factor
			component_n = component_n / j;  // keep only the undivided quotient
		}
		if (temp_factors.size() > finally_factors.size())  // keep the longest run
			finally_factors = temp_factors;
		else if (temp_factors.size() == finally_factors.size() && temp_factors[0] < finally_factors[0])  // on a tie, keep the run with the smaller start
			finally_factors = temp_factors;
	}
	cout << finally_factors.size() << endl;  // number of consecutive factors
	cout << finally_factors[0];  // print the run joined by '*'
	for (size_t i = 1; i < finally_factors.size(); i++)
		cout << "*" << finally_factors[i];
	//system("pause");
	return 0;
}
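
Side note on the search bounds: any run of length at least 2 must start at some i with i*(i+1) <= N, hence i <= sqrt(N), which is why the outer loop stops at i*i <= n. And since 2*3*...*13 = 6227020800 already exceeds 2^31, no run can contain more than 11 factors (2 through 12), so the inner loop is always tiny. A minimal standalone sketch that verifies this cap (not part of the submitted solution):

#include<iostream>
using namespace std;
int main()
{
	long long product = 1;
	int k = 1;
	// multiply 2*3*4*... while the running product stays below 2^31
	while (product * (k + 1) < (1LL << 31))
	{
		k++;
		product *= k;
	}
	// the run 2..k has k-1 factors; prints "11 factors, product 479001600"
	cout << k - 1 << " factors, product " << product << endl;
	return 0;
}

Fed the sample input 630, the main program above prints 3 and 5*6*7, matching the expected output.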

