Analyzing the OT Source Code: OrtHello, I'll Crack You Sooner or Later (Part 1) - Understanding the OTTween Class

First of all, OT consists of three folders: _Base, Graphics, and Tweening.


Let's start with Tweening. It is essentially the OrtHello version of iTween and is very easy to understand; if all you need is 2D movement, this approach is worth considering. In the code it is called OTTween.

The OTTween constructor takes three parameters:

1 object owner
    The object whose 'public' properties we are going to tween.

2 float duration
    Duration of this tween, in seconds.

3 OTEase easing
    'Default' easing function of this tween (obtained via OTEasing).

Usage:

new OTTween(GetComponent<OTSprite>(), 1f, OTEasing.ElasticOut).
    Tween("size", new Vector2(80, 80)).
    Tween("tintColor", new Color(0.5f + Random.value * 0.5f, 0.5f + Random.value * 0.5f, 0.5f + Random.value * 0.5f), OTEasing.StrongOut);
After creating an OTTween, you drive it with calls of the following form:
    Tween( string property, object fromValue, object toValue, OTEase easing )
Let's look at its overloads:
   Tween(string var, object fromValue, object toValue)  // uses the default: this.easing = OTEasing.Linear
   Tween(string var, object toValue, OTEase easing, OTEase pongEasing)  // the last parameter is the easing used when 'ponging'
   Tween(string var, object toValue)
   Tween(string var, object toValue, OTEase easing)  // probably the most commonly used
   Tween(string var, object fromValue, object toValue, OTEase easing)
   Tween(string var, object fromValue, object toValue, OTEase easing, OTEase pongEasing)
These specify the property to animate, optionally an explicit starting value, and the target value to reach.
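For instance, a quick sketch of the from/to form (the rotation property is assumed here to be a public float on OTSprite, and the values are hypothetical):

    // Sketch: tween rotation explicitly from 0 to 180 degrees over 2 seconds.
    new OTTween(GetComponent<OTSprite>(), 2f, OTEasing.Linear).
        Tween("rotation", 0f, 180f, OTEasing.SineInOut);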
The TweenAdd variants animate a property relative to its current value, adding addValue to it:
    TweenAdd(string var, object addValue)
    TweenAdd(string property, object addValue, OTEase easing )
    TweenAdd(string var, object addValue, OTEase easing, OTEase pongEasing)
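A minimal sketch (assuming position is a public Vector2 property on OTSprite, like size and tintColor in the example above): nudge the sprite 50 units to the right relative to wherever it currently is.

    // Sketch: relative move; the target is current position + (50, 0).
    new OTTween(GetComponent<OTSprite>(), 0.5f, OTEasing.QuadOut).
        TweenAdd("position", new Vector2(50, 0));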

    Wait ( float waitTime )
Sets a delay before the tween starts running, comparable to CCDelayTime in cocos2d.
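Assuming Wait, like Tween, returns the tween so that calls can be chained, a sketch looks like this:

    // Sketch: wait half a second, then tween size over one second.
    new OTTween(GetComponent<OTSprite>(), 1f, OTEasing.Linear).
        Wait(0.5f).
        Tween("size", new Vector2(120, 120));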

    TweenVar(object fromValue, object toValue, OTEase easing, FieldInfo field, PropertyInfo prop)
Performs the actual interpolation of a single value; the supported types are: Single, int, double, Vector2, Vector3, and Color.
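Conceptually this boils down to an eased interpolation written back through reflection. A rough sketch of the idea for a float, not the library's actual code (the Penner-style easing signature ease(elapsed, start, change, duration) is assumed; FieldInfo and PropertyInfo come from System.Reflection):

    // Sketch only: compute the eased value, then assign it back to the
    // owner through either a public field or a public property.
    void TweenFloatSketch(object owner, float fromValue, float toValue,
                          OTEase easing, FieldInfo field, PropertyInfo prop,
                          float time, float duration)
    {
        // Eased value between fromValue and toValue at the current time.
        float value = easing.ease(time, fromValue, toValue - fromValue, duration);
        if (field != null)
            field.SetValue(owner, value);
        else if (prop != null)
            prop.SetValue(owner, value, null);
    }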
    Stop()
Stops the tween immediately.
     InitCallBacks(Component target)
     {
         callBackTargets.Add(target);
     }
Registers a component as a callback target; the tween will later invoke callback methods on it by name.
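For example, a hypothetical MonoBehaviour registered via InitCallBacks only needs a method whose name matches what Update invokes ("onTweenFinish" first, "OnTweenFinish" as the fallback), taking the finished tween as its parameter:

    public class TweenListener : MonoBehaviour
    {
        // Invoked through CallBack(...) when the tween reaches its duration.
        public void OnTweenFinish(OTTween tween)
        {
            Debug.Log("Tween finished on " + name);
        }
    }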
 
The key part is how Update does its work:
 
   public bool Update(float deltaTime)
    {    
        // Stop() was requested: mark the tween as no longer running and
        // report true so the caller can discard it.
        if (_doStop)
        {
            _running = false;
            return true;
        }
        
        // A pending Wait(): count the delay down before doing anything.
        // (Note that this uses Time.deltaTime, not the deltaTime parameter.)
        if (waitTime>0)
        {
            waitTime -= Time.deltaTime;
            if (waitTime>0) return false;
        }
        if (vars.Count==0) return false;   // nothing registered to tween yet
        _running = true;
        
        // Advance the clock, clamped to the total duration.
        time+=deltaTime;
        if (time > duration) time = duration;
        
        // Interpolate every registered variable, using its per-variable
        // easing if one was supplied, otherwise the tween's default easing.
        for (int v=0; v<vars.Count; v++)
        {
            OTEase easing = this.easing;
            if (easings[v] != null)
                easing = easings[v];
            TweenVar(fromValues[v],toValues[v],easing,fields[v],props[v]);
        }
        
        // Finished: fire the onTweenFinish delegate, then the named callbacks
        // ("onTweenFinish" first, "OnTweenFinish" as a fallback).
        if (time == duration)
        {
            _running = false;
            if (onTweenFinish != null)
                onTweenFinish(this);
            if (!CallBack("onTweenFinish", new object[] { this }))
                CallBack("OnTweenFinish", new object[] { this });
            return true;
        }
        else
            return false;
    }
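In OrtHello the framework pumps this method itself every frame, but as a sketch, if you managed a tween by hand you could drive it from a hypothetical MonoBehaviour like this (not part of the library):

    public class TweenDriver : MonoBehaviour
    {
        OTTween tween;  // assigned elsewhere

        void Update()
        {
            // OTTween.Update returns true once the tween has finished
            // (or was stopped), so the reference can be dropped then.
            if (tween != null && tween.Update(Time.deltaTime))
                tween = null;
        }
    }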

OK, that should make the working mechanism of OTTween fairly clear.

The OTEasing types are (anyone familiar with easing curves will recognize them):
    Linear (the default easeType)
    BackIn, BackOut, BackInOut
    BounceIn, BounceOut, BounceInOut
    CircIn, CircOut, CircInOut
    CubicIn, CubicOut, CubicInOut
    ElasticIn, ElasticOut, ElasticInOut
    ExpoIn, ExpoOut, ExpoInOut
    QuadIn, QuadOut, QuadInOut
    QuartIn, QuartOut, QuartInOut
    QuintIn, QuintOut, QuintInOut
    SineIn, SineOut, SineInOut
    StrongIn, StrongOut, StrongInOut
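To make the naming concrete, here is a sketch of the classic Penner-style math behind the Quad pair (the ease(elapsed, start, change, duration) signature is an assumption, matching the TweenVar sketch above):

    float QuadIn(float t, float b, float c, float d)
    {
        t /= d;
        return c * t * t + b;           // accelerates from a standstill
    }

    float QuadOut(float t, float b, float c, float d)
    {
        t /= d;
        return -c * t * (t - 2f) + b;   // decelerates into the target
    }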



