Learning Frameworks: An Introduction to the PE Framework

1. The parts needed to develop a new feature with the PE framework

(Figure: PE framework overview)

2. PE framework workflow (important)

First, the template is located using the id in <transaction>; the template in turn identifies the chain of responsibility (chain). Once the chain is determined, execution follows the flowchart: the commands in the chain run in order until the delegate command is reached, at which point the chain ends and control returns to the template, which executes the <action> and finally forwards to the corresponding JSP page. The flowchart is shown below:

(Figure: PE framework workflow flowchart)
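To make the workflow concrete, here is a minimal Java sketch of the dispatch sequence described above. It is only an illustration under assumptions: every name in it (TransactionEngine, Template, Command, Action, Context, dispatch) is a hypothetical stand-in, not the actual PowerEngine API.

```java
import java.util.List;
import java.util.Map;

// Hypothetical sketch of the flow above: transaction id -> template -> chain
// -> commands (until the delegate command) -> <action> -> JSP forward.
// None of these names are the real PowerEngine API; they only mirror the text.
public class TransactionEngine {

    public interface Context { Object getData(String k); void setData(String k, Object v); }
    public interface Command { boolean execute(Context ctx); }   // true = this command ends the chain
    public interface Action  { void execute(Context ctx); }

    public static class Template {
        List<Command> chain;     // system-level checks, ending with the delegate command
        Command delegate;        // the delegate command that hands control back to the template
        List<Action> actions;    // the business processing itself (<action> elements)
        String jspPage;          // view to forward to on success
        String errorPage;        // view to forward to when a command stops processing
    }

    private final Map<String, Template> templatesByTransactionId;   // built from <transaction> config

    public TransactionEngine(Map<String, Template> templatesByTransactionId) {
        this.templatesByTransactionId = templatesByTransactionId;
    }

    public String dispatch(String transactionId, Context ctx) {
        // 1. Locate the template by the id declared in <transaction>.
        Template template = templatesByTransactionId.get(transactionId);

        // 2. Run the chain. A command returning true ends chain processing:
        //    either a check decided to stop, or the delegate command was reached.
        boolean handedOff = false;
        for (Command command : template.chain) {
            if (command.execute(ctx)) {
                handedOff = (command == template.delegate);
                break;
            }
        }
        if (!handedOff) {
            return template.errorPage;          // processing ended before the delegate
        }

        // 3. Back in the template: execute the configured <action> steps.
        for (Action action : template.actions) {
            action.execute(ctx);
        }

        // 4. Forward to the JSP page configured for this transaction.
        return template.jspPage;
    }
}
```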

3. Overview of each part:
 

Whether a transaction is sent over HTTP or TCP, the Adapter for that channel converts the request Form (HTTP) or message (TCP) into a channel-independent Context. Once the channel Adapter has produced the Context, control passes to the PowerEngine core control module, which uses the transaction Id to determine the processing steps the transaction must go through.

•      First, the series of Commands in the Chain is executed; if any one Command decides that processing should end, processing stops immediately.

•      When the Chain reaches the Delegate Command, the Template begins to execute (both cases are sketched below).

•      Different Templates invoke different Actions to complete the actual transaction processing.
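Reusing the hypothetical Command and Context interfaces from the dispatch sketch above, two invented commands illustrate these two cases: a check command that ends processing immediately, and the delegate command that hands control back to the Template.

```java
// Invented example: a system-level check that can end processing immediately.
// If the user is not logged in, the chain stops and the Template never runs.
public class LoginCheckCommand implements TransactionEngine.Command {
    public boolean execute(TransactionEngine.Context ctx) {
        if (ctx.getData("loginUser") == null) {        // key name is an assumption
            ctx.setData("errorCode", "NOT_LOGGED_IN");
            return true;    // true: this command ends chain processing right here
        }
        return false;       // false: continue with the next command in the chain
    }
}

// The delegate command: reaching it means the system-level checks all passed,
// so the chain ends normally and control returns to the Template's Actions.
public class DelegateCommand implements TransactionEngine.Command {
    public boolean execute(TransactionEngine.Context ctx) {
        return true;
    }
}
```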
 

Context is the core data-exchange interface class of the whole Power Engine framework.

•      Data elements of the transaction request can be read with the GetData method, and result data can be returned with the SetData method.

•      In HTTP development there is the HttpServletContext implementation class, and in TCP/Socket development there is TcpContext. Whatever the channel, HTTP or TCP, the request Form (HTTP) or message (TCP) is ultimately converted into a channel-independent Context.
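As a stand-alone sketch, this is roughly what the Context contract and two channel-specific implementations could look like in Java. Method names follow Java conventions (getData/setData for GetData/SetData), and HttpRequestContext/TcpMessageContext are simplified stand-ins for the real HttpServletContext and TcpContext classes.

```java
import java.util.HashMap;
import java.util.Map;
import javax.servlet.http.HttpServletRequest;

// Sketch of the channel-independent data-exchange contract.
public interface Context {
    Object getData(String key);              // read a data element of the request
    void setData(String key, Object value);  // put a result back for the response
}

// HTTP channel: backed by the servlet request (stand-in for HttpServletContext).
class HttpRequestContext implements Context {
    private final HttpServletRequest request;
    HttpRequestContext(HttpServletRequest request) { this.request = request; }
    public Object getData(String key) {
        Object value = request.getAttribute(key);
        return value != null ? value : request.getParameter(key);  // form field or attribute
    }
    public void setData(String key, Object value) { request.setAttribute(key, value); }
}

// TCP/Socket channel: backed by fields parsed out of the message (stand-in for TcpContext).
class TcpMessageContext implements Context {
    private final Map<String, Object> fields = new HashMap<>();
    public Object getData(String key) { return fields.get(key); }
    public void setData(String key, Object value) { fields.put(key, value); }
}
```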

 

style: styles, used for things such as validating the format of input data.

 

chain: a system-level abstraction of transaction logic, e.g. transaction permissions, login control, logging and input checking.

command: an operation under a chain; understood as a single Command executed as one step of the chain.

template: an abstraction over a group of transactions that share a similar processing flow; it defines the internal execution flow of a transaction, e.g. review transactions, query transactions.

Action: the smallest unit of business processing in PowerEngine. The Action is also the object that an individual application developer deals with directly; transaction-unit processing is implemented through Actions, which are the concrete steps of a transaction.
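For illustration, a hypothetical Action for a balance-query transaction might look like this, reusing the Action and Context interfaces from the dispatch sketch above; the class name, the data keys and the stubbed query are invented for the example.

```java
import java.math.BigDecimal;

// Hypothetical Action: only the shape (read input from the Context, do the
// single unit of processing, write the result back) reflects the text above.
public class AccountQueryAction implements TransactionEngine.Action {

    public void execute(TransactionEngine.Context ctx) {
        // 1. Read the request data element that the channel adapter put in the Context.
        String accountNo = (String) ctx.getData("accountNo");

        // 2. Perform the single unit of business processing (stubbed here).
        BigDecimal balance = queryBalance(accountNo);

        // 3. Put the result back so the JSP page (or TCP response) can render it.
        ctx.setData("balance", balance);
    }

    private BigDecimal queryBalance(String accountNo) {
        // In a real transaction this would go through the sqlmap / data-access layer.
        return BigDecimal.ZERO;
    }
}
```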

 

sqlmap: an SQL mapping tool that keeps SQL statements separate from the program code.
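The intent is iBATIS-style SQL maps, where statements live in an external mapping file and code refers to them only by id (the original post does not name the underlying library, so treat that as an assumption). A rough, self-contained sketch of the "SQL outside the code" idea:

```java
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.util.Properties;

// Rough illustration only: statements are looked up by id from an external
// mapping file instead of being hard-coded in Java. The file format, ids and
// helper class are invented for this example.
public class SimpleSqlMap {

    private final Properties statements = new Properties();

    public SimpleSqlMap(InputStream mappingFile) throws Exception {
        // e.g. account-sqlmap.properties:
        //   queryBalance = SELECT balance FROM account WHERE account_no = ?
        statements.load(mappingFile);
    }

    public Object queryForObject(Connection conn, String statementId, String param) throws Exception {
        String sql = statements.getProperty(statementId);   // SQL resolved by id, not embedded in code
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, param);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getObject(1) : null;
            }
        }
    }
}
```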


