Part Three: Classes and Objects

1. The complete Clock class program

#include <iostream>
using namespace std;

class Clock {
    public:
        void setTime(int newH = 0, int newM = 0, int newS = 0); // default arguments; note: once set here in the declaration, do not repeat them in the out-of-class definition
        void showTime();
    private:
        int hour, minute, second;
};

// Implementation of the Clock class
void Clock::setTime(int newH, int newM, int newS)
{
    hour = newH;
    minute = newM;
    second = newS;
}

void Clock::showTime()
{
    cout << hour << ":" << minute << ":" << second << endl;
}

// Main function
int main()
{
    Clock myClock; // define an object
    cout << "First time set and output:" << endl;
    myClock.setTime();   // set the time to the default values
    myClock.showTime();
    cout << "Second time set and output:" << endl;
    myClock.setTime(8, 30, 30);
    myClock.showTime();

    return 0;
}

 

2. Frustrating... I tried to add a constructor, a copy constructor, and a destructor to the Clock class, but ran into a problem with the constructors: the compiler reported "call of overloaded 'Clock()' is ambiguous", meaning the call to the Clock constructor is ambiguous, so overload resolution fails.

Here is the code:

#include <iostream>
using namespace std;

class Clock {
    public:
        void setTime(int newH = 0, int newM = 0, int newS = 0); // default arguments; once set here in the declaration, do not repeat them in the definition
        void showTime();
        Clock(int newH = 0, int newM = 0, int newS = 0); // parameterized constructor
        Clock();                 // no-arg constructor
        Clock(Clock& clockCopy); // copy constructor
        ~Clock() {}              // inline destructor
    private:
        int hour, minute, second;
};

// Implementation of the Clock class
Clock::Clock(int newH, int newM, int newS)
{
    hour = newH;
    minute = newM;
    second = newS;
}

Clock::Clock()
{
    hour = 0;
    minute = 0;
    second = 0;
}

Clock::Clock(Clock& clockCopy) // the copy constructor assigns the attributes of the passed-in object to the current object
{
    hour = clockCopy.hour;
    minute = clockCopy.minute;
    second = clockCopy.second;
}

void Clock::setTime(int newH, int newM, int newS)
{
    hour = newH;
    minute = newM;
    second = newS;
}

void Clock::showTime()
{
    cout << hour << ":" << minute << ":" << second << endl;
}

// Main function
int main()
{
    Clock myClock; // error: call of overloaded 'Clock()' is ambiguous
    cout << "First time set and output:" << endl;
    myClock.setTime();   // set the time to the default values
    myClock.showTime();
    cout << "Second time set and output:" << endl;
    myClock.setTime(8, 30, 30);
    myClock.showTime();

    return 0;
}

 

The key part is this:

When every parameter of the parameterized constructor has a default value, the compiler allows it to be called with no arguments at all, and that conflicts with the no-arg constructor.

To avoid this, remove the default values from the parameterized constructor and declare it as Clock(int newH, int newM, int newS); instead.

If you really must keep the default values and still avoid the conflict with the no-arg constructor, one workaround is to add an unused parameter, making sure it does not clash with any other single-parameter constructor: Clock(int nouse, int newH = 0, int newM = 0, int newS = 0); // a value is passed in for nouse, but the implementation simply ignores it. This only makes sense in certain situations; in general it is unnecessary and not worth the trouble.

Reposted from: https://www.cnblogs.com/wildness-priest/p/10714330.html
