Problem N-14 Adding Reversed Numbers

Description

The Antique Comedians of Malidinesia prefer comedies to tragedies. Unfortunately, most of the ancient plays are tragedies. Therefore the dramatic advisor of ACM has decided to transfigure some tragedies into comedies. Obviously, this work is very hard because the basic sense of the play must be kept intact, although all the things change to their opposites. For example the numbers: if any number appears in the tragedy, it must be converted to its reversed form before being accepted into the comedy play.

A reversed number is a number written in Arabic numerals with the order of its digits reversed: the first digit becomes the last and vice versa. For example, if the main hero had 1245 strawberries in the tragedy, he has 5421 of them now. Note that all leading zeros are omitted. That means if the number ends with a zero, the zero is lost by reversing (e.g. 1200 gives 21). Also note that the reversed number never has any trailing zeros.

ACM needs to calculate with reversed numbers. Your task is to add two reversed numbers and output their reversed sum. Of course, the result is not unique because any particular number is a reversed form of several numbers (e.g. 21 could be 12, 120 or 1200 before reversing). Thus we must assume that no zeros were lost by reversing (e.g. assume that the original number was 12).


Input

The input consists of N cases. The first line of the input contains only positive integer N. Then follow the cases. Each case consists of exactly one line with two positive integers separated by space. These are the reversed numbers you are to add.


Output

For each case, print exactly one line containing only one integer - the reversed sum of two reversed numbers. Omit any leading zeros in the output.


Sample Input

3
24 1
4358 754
305 794


Sample Output

34
1998
1

Problem Summary

Read two numbers, reverse each of them (the units digit becomes the leading digit, and so on), add the reversed values, then reverse the sum and print it.

Approach

Write a function that reverses a number. Reverse both inputs, add them, reverse the sum, and print the result.

Source Code
#include<bits/stdc++.h>
using namespace std;

// Reverse the decimal digits of a. Leading zeros of the result are
// dropped automatically, e.g. revs(1200) == 21.
long long revs(long long a)
{
    long long b = 0;
    while (a)
    {
        b = b * 10 + a % 10;
        a /= 10;
    }
    return b;
}

int main()
{
    long long a = 0, b = 0;
    int T = 0;
    cin >> T;
    while (T--)
    {
        cin >> a >> b;
        a = revs(a) + revs(b);   // reverse both inputs and add
        cout << revs(a) << endl; // reverse the sum before printing
    }
    return 0;
}

Alternatively, the numbers can be treated as strings and processed digit by digit, with a carry flag maintained during the addition; this achieves the same result.
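As a sketch of that string-based alternative (the helper name addReversed is mine, not from the original post): because the inputs are already reversed, their least significant digits come first, so a single left-to-right pass with a carry produces the digits of the reversed sum directly, with no explicit reversal step.

```cpp
#include <algorithm>
#include <string>
using namespace std;

// Add two reversed numbers given as digit strings. The inputs already have
// their least significant digit first, so adding left-to-right with a carry
// yields the digits of the reversed sum in print order.
string addReversed(const string &a, const string &b) {
    string sum;
    int carry = 0;
    for (size_t i = 0; i < max(a.size(), b.size()) || carry; ++i) {
        int d = carry;
        if (i < a.size()) d += a[i] - '0';
        if (i < b.size()) d += b[i] - '0';
        sum.push_back(char('0' + d % 10));
        carry = d / 10;
    }
    // Omit leading zeros of the reversed sum. They appear when the true
    // sum ends in zeros, e.g. 503 + 497 = 1000 gives "0001", printed as "1".
    size_t pos = sum.find_first_not_of('0');
    return pos == string::npos ? "0" : sum.substr(pos);
}
```

A driver identical to the main above (read T, then two tokens per case) works unchanged with this helper. As a bonus, the string version is immune to integer overflow when the inputs are very long.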
