Lake Counting_poj2386_p32

The field is an N x M grid of 'W' (water) and '.' (dry land). A pond is a maximal group of 'W' cells connected in any of the 8 directions. Each DFS call sinks one whole pond, so the number of DFS calls started from main is the answer.

#include <cstdio>
#include <cstring>	// memset
#include <iostream>
using namespace std;

#define MAX 100
int N, M;
char field[MAX][MAX];

// Flood fill: sink the pond containing (x, y) by turning every
// 8-connected 'W' cell into '.'.
void dfs(int x, int y)
{
	field[x][y] = '.';
	// Try all 8 neighbours (dx == dy == 0 revisits (x, y), which is
	// already dry, so it is harmless).
	for (int dx = -1; dx <= 1; dx++)
	{
		for (int dy = -1; dy <= 1; dy++)
		{
			int nx = x + dx, ny = y + dy;
			if (nx >= 0 && nx < N &&
				ny >= 0 && ny < M && field[nx][ny] == 'W')
				dfs(nx, ny);
		}
	}
}

int main()
{
	while (~scanf("%d %d", &N, &M))	// loop until EOF (scanf returns -1)
	{
		int res = 0;
		memset(field, '.', sizeof(field));	// start dry; the N*M cells are overwritten below
		for (int i = 0; i < N; i++)
		{
			for (int j = 0; j < M; j++)
			{
				cin >> field[i][j];
			}
		}

		// Each 'W' found here starts a new pond; dfs removes the whole pond.
		for (int i = 0; i < N; i++)
		{
			for (int j = 0; j < M; j++)
			{
				if (field[i][j] == 'W')
				{
					dfs(i, j);
					res++;
				}
			}
		}
		printf("%d\n", res);
	}
	
	//getchar();
	return 0;
}
