A Record of Useful Websites

 

1 Java

1.1 Summaries

Java knowledge summary: https://thinkwon.blog.csdn.net/article/details/103592572?utm_medium=distribute.pc_relevant_t0.none-t

 

1.2 Frameworks

sa-token, a lightweight authentication and authorization framework: http://sa-token.dev33.cn/doc/#/
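
A minimal sketch of what sa-token looks like in use, based on the official docs linked above. The controller class, endpoint paths, and user id are illustrative assumptions, not part of the library:

```java
import cn.dev33.satoken.stp.StpUtil;
import org.springframework.web.bind.annotation.GetMapping;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestParam;
import org.springframework.web.bind.annotation.RestController;

// Hypothetical Spring Boot controller showing sa-token's core StpUtil API
@RestController
public class AuthController {

    // Log a user in: sa-token issues a token and binds it to the account id
    @PostMapping("/login")
    public String login(@RequestParam long userId) {
        StpUtil.login(userId);
        return "token = " + StpUtil.getTokenValue();
    }

    // Assert login state inside any endpoint; throws NotLoginException otherwise
    @GetMapping("/me")
    public String me() {
        StpUtil.checkLogin();
        return "current user: " + StpUtil.getLoginId();
    }
}
```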

 

1.3 Tools

Behavioral CAPTCHA (anji-plus): https://gitee.com/anji-plus/captcha

Getting hardware information via OSHI: https://www.cnblogs.com/songxingzhu/p/9107878.html
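
For reference, a small OSHI sketch (assuming `oshi-core` is on the classpath; method names follow the OSHI 5.x/6.x API, so verify them against the version you use):

```java
import oshi.SystemInfo;
import oshi.hardware.CentralProcessor;
import oshi.hardware.GlobalMemory;
import oshi.hardware.HardwareAbstractionLayer;

public class HardwareProbe {
    public static void main(String[] args) {
        SystemInfo si = new SystemInfo();
        HardwareAbstractionLayer hal = si.getHardware();

        // CPU model and physical core count
        CentralProcessor cpu = hal.getProcessor();
        System.out.println("CPU: " + cpu.getProcessorIdentifier().getName());
        System.out.println("Physical cores: " + cpu.getPhysicalProcessorCount());

        // Total memory, reported in MiB
        GlobalMemory mem = hal.getMemory();
        System.out.println("Total memory: " + mem.getTotal() / (1024 * 1024) + " MiB");

        System.out.println("OS: " + si.getOperatingSystem());
    }
}
```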

AviatorScript, a niche scripting language for the JVM: https://github.com/killme2008/aviatorscript
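
AviatorScript also works as an embedded expression engine. A minimal example, assuming the `com.googlecode.aviator:aviator` dependency:

```java
import com.googlecode.aviator.AviatorEvaluator;

import java.util.HashMap;
import java.util.Map;

public class AviatorDemo {
    public static void main(String[] args) {
        // Evaluate a constant expression (prints 7)
        System.out.println(AviatorEvaluator.execute("1 + 2 * 3"));

        // Evaluate an expression against variables supplied at runtime
        Map<String, Object> env = new HashMap<>();
        env.put("price", 19.9);
        env.put("qty", 3);
        System.out.println(AviatorEvaluator.execute("price * qty", env));
    }
}
```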

JCEF (Java Chromium Embedded Framework): https://blog.csdn.net/u013642500/article/details/103003284
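
A bare-bones sketch of embedding a Chromium browser in a Swing window with JCEF. It assumes the JCEF native bundle is installed; the class and method names follow the jcef sample code, so treat this as an outline rather than a verified build recipe:

```java
import org.cef.CefApp;
import org.cef.CefClient;
import org.cef.CefSettings;
import org.cef.browser.CefBrowser;

import javax.swing.JFrame;

public class JcefDemo {
    public static void main(String[] args) {
        // Windowed (non-offscreen) rendering
        CefSettings settings = new CefSettings();
        settings.windowless_rendering_enabled = false;

        CefApp cefApp = CefApp.getInstance(args, settings);
        CefClient client = cefApp.createClient();

        // Arguments: url, offscreen-rendering flag, transparency flag
        CefBrowser browser = client.createBrowser("https://www.example.com", false, false);

        JFrame frame = new JFrame("JCEF demo");
        frame.getContentPane().add(browser.getUIComponent());
        frame.setSize(1024, 768);
        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
        frame.setVisible(true);
    }
}
```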

MapDB, an embedded in-memory database engine: http://www.mapdb.org
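
A minimal MapDB sketch using the 3.x builder API (`org.mapdb:mapdb`): `memoryDB()` keeps data on the heap, while `fileDB(...)` would persist it to disk:

```java
import org.mapdb.DB;
import org.mapdb.DBMaker;
import org.mapdb.Serializer;

import java.util.concurrent.ConcurrentMap;

public class MapDbDemo {
    public static void main(String[] args) {
        // DB implements Closeable, so try-with-resources releases the store
        try (DB db = DBMaker.memoryDB().make()) {
            ConcurrentMap<String, String> map = db
                    .hashMap("bookmarks", Serializer.STRING, Serializer.STRING)
                    .createOrOpen();
            map.put("mapdb", "http://www.mapdb.org");
            System.out.println(map.get("mapdb"));
        }
    }
}
```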

SFTPGo (an FTP/FTPS/WebDAV server written in Go; easy to deploy, strongly recommended): https://github.com/drakkan/sftpgo

1.4 Java Encryption and Obfuscation

ClassFinal, for obfuscating and encrypting Java code: https://blog.csdn.net/qingquanyingyue/article/details/108475301

A ready-to-use Java code obfuscation tool: https://blog.csdn.net/qq_27574367/article/details/105930348

2 Networking

2.1 Network Security

A network security article: https://blog.csdn.net/Eastmount/article/details/99683207

2.2 Network Configuration

Network configuration thread (Kafan forum): https://bbs.kafan.cn/thread-2175407-1-1.html

 

3 Question Answering and Knowledge Graphs

Chatbot platform (Chatopera): https://bot.chatopera.com/dashboard/clients/5f62c12381eb84001179f6a3/sdkdl

AnyQ, Baidu's question answering framework: https://gitee.com/baidu/AnyQ

Semantic similarity (TextPair): https://github.com/IceFlameWorm/TextPair

LAC, Baidu's word segmentation and part-of-speech tagging tool: https://github.com/baidu/lac

2019-06-17 survey on bringing a question answering system to production: https://lookme.blog.csdn.net/article/details/93968703

4 Chrome Extension Downloads

Domestic (mainland China) extension download site 1 (fairly comprehensive): https://crxdl.com/

Domestic extension download site 2: https://www.extfans.com/

Domestic extension download site 3: http://www.cnplugins.com/

Domestic extension download site 4: https://huajiakeji.com/

 

5 Laws and Regulations Websites

China Court Network: https://www.chinacourt.org/law/searchproc.shtml?keyword=%E5%8C%96%E5%AD%A6%E5%93%81&t=2&law_type_id=MzAwNEAFAA%3D%3D

Laws and regulations database: http://search.chinalaw.gov.cn/search2.html

Ministry of Ecology and Environment document library: https://www.mee.gov.cn/wjk/

 

6 Software Search and Recommendation Sites

Windows software recommendations: LANGS - Windows Apps That Amaze Us (gitbook.io)

Adobe software resources: https://www.sheui.com/Tools/828.html

7 Registration and Activation Codes

Axure 9: https://www.axure8.com/htm/2020061641.html

Axure RP 9.0.0.3687 release build (latest at the time of writing; verified working against a version freshly downloaded in June 2020)

Licensee: Freecrackdownload.com

Key: 5vYpJgQZ431X/G5kp6jpOO8Vi3TySCBnAslTcNcKkszfPH7jaM4eKM8CrALBcEC1

 

 

 

