New term, new start!

I'm a sophomore now, and from this term on I have my own major in CS!

The next thing I'm going to do is to write down and summarize what I've learned in my CS classes once a week. Sometimes I may also report new knowledge I've picked up on my own, or problems I've run into.

I'm doing this to push myself to keep studying CS regularly. If you happen to skim my posts, please comment on or criticize any improper detail you find; it will help me a lot. Thanks!

1 Function


1.1 Recursive function (see my last post)

Here's the Tower of Hanoi problem; it's a classic.

#include <iostream>
using namespace std;

void move(char x, char y)
{
    cout << x << " --> " << y << endl;
}

void Hanoi(int n, char x, char y, char z)    // move n disks from x to z, using y as auxiliary
{
    if (n == 1)
        move(x, z);
    else
    {
        Hanoi(n - 1, x, z, y);    // move the top n-1 disks from x to y
        move(x, z);               // move the largest disk from x to z
        Hanoi(n - 1, y, x, z);    // move the n-1 disks from y onto z
    }
}

int main()
{
    int n;
    cin >> n;
    Hanoi(n, 'A', 'B', 'C');
    return 0;
}


1.2 Function overload: functions with the same name must differ in the number of parameters, or, when the number is the same, in the types or order of the parameters.

1.3 Default parameters: default values must be supplied starting from the rightmost parameter.

1.4 Inline function

1.5 Function template: one definition covers the same steps applied to parameters of different types.

Format: template <typename TypeName> ReturnType FunctionName(parameter list)

Example:

#include <iostream>
#include <cstring>    // for strcmp
using namespace std;

template <typename T> T Max(T a, T b)    // template declaration
{
    return (a > b) ? a : b;
}

char *Max(char *a, char *b)    // overload the template for C strings: compare contents rather than first addresses
{
    return strcmp(a, b) >= 0 ? a : b;
}

int main()
{
    int a = 3, b = 5;
    char c = '3', d = '5';
    double x = 3.5, y = 5.5;
    char s[10] = "abcd", t[10] = "ABCD";
    cout << Max(a, b) << endl;    // T deduced automatically as int
    cout << Max(c, d) << endl;    // T deduced as char
    cout << Max(x, y) << endl;    // T deduced as double
    cout << Max(s, t) << endl;    // calls the char* overload
    return 0;
}




