Spring in practice: parsing CRON expressions

If you want to use Spring's scheduling support, you will usually describe the schedule with a CRON expression. Note that each framework implements its own CRON dialect, so an expression must be parsed with that framework's own parser:
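To illustrate why the dialects differ, here is a minimal, simplified sketch (not Spring's actual parser; the class and method names are made up for this example) of how a single cron field such as `1/1` or `*/15` can be expanded into concrete values:

```java
import java.util.ArrayList;
import java.util.List;

public class CronFieldDemo {
	// Simplified sketch: expand one cron field over an inclusive range.
	// Supports a literal ("3"), a wildcard ("*") and start/step ("1/1", "*/15");
	// real parsers also handle ranges ("1-5"), lists ("1,3,5"), names, etc.
	static List<Integer> expand(String field, int min, int max) {
		List<Integer> values = new ArrayList<>();
		String[] parts = field.split("/");
		int start = parts[0].equals("*") ? min : Integer.parseInt(parts[0]);
		if (parts.length == 1 && !parts[0].equals("*")) {
			values.add(start); // a single literal value
			return values;
		}
		int step = parts.length > 1 ? Integer.parseInt(parts[1]) : 1;
		for (int v = start; v <= max; v += step) {
			values.add(v);
		}
		return values;
	}

	public static void main(String[] args) {
		// "1/1" in the month field (range 1..12) means every month from January on
		System.out.println(expand("1/1", 1, 12));  // [1, 2, ..., 12]
		System.out.println(expand("*/15", 0, 59)); // [0, 15, 30, 45]
	}
}
```

Which forms are legal in which field is exactly where implementations diverge; for example, the `?` placeholder used below comes from the Quartz dialect.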

import org.springframework.scheduling.support.CronSequenceGenerator;

import java.text.SimpleDateFormat;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;

try {
	int size = 10;
	// fire at midnight on the 1st of every month
	String cron = "0 0 0 1 1/1 ?";
	final CronSequenceGenerator g = new CronSequenceGenerator(cron);
	Date d = new Date();
	SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss");
	List<String> res = new ArrayList<>(size);
	for (int i = 0; i < size; i++) {
		d = g.next(d); // next matching time strictly after d
		res.add(sdf.format(d));
	}
	res.forEach(System.out::println);
} catch (Exception e) {
	e.printStackTrace();
}

The code above prints the next 10 times that match the expression:

2020-05-01 00:00:00
2020-06-01 00:00:00
2020-07-01 00:00:00
2020-08-01 00:00:00
2020-09-01 00:00:00
2020-10-01 00:00:00
2020-11-01 00:00:00
2020-12-01 00:00:00
2021-01-01 00:00:00
2021-02-01 00:00:00
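Note that since Spring 5.3, CronSequenceGenerator is deprecated in favor of org.springframework.scheduling.support.CronExpression. For this particular schedule you can also cross-check the sequence with plain java.time and no Spring dependency at all. The sketch below is illustrative only (the class and method names are made up, and it hard-codes the "1st of every month at midnight" rule rather than parsing the expression):

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;
import java.util.ArrayList;
import java.util.List;

public class NextFirstOfMonth {
	// Compute the next `count` times matching "midnight on the 1st of every
	// month", strictly after `from` -- the schedule "0 0 0 1 1/1 ?" describes.
	static List<String> next(LocalDateTime from, int count) {
		DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");
		List<String> out = new ArrayList<>();
		// Snap back to midnight on the 1st of `from`'s month, then step forward.
		LocalDateTime t = from.toLocalDate().withDayOfMonth(1).atStartOfDay();
		for (int i = 0; i < count; i++) {
			t = t.plusMonths(1);
			out.add(fmt.format(t));
		}
		return out;
	}

	public static void main(String[] args) {
		next(LocalDateTime.of(2020, 4, 10, 12, 0), 3).forEach(System.out::println);
		// 2020-05-01 00:00:00
		// 2020-06-01 00:00:00
		// 2020-07-01 00:00:00
	}
}
```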