The difference between self.array and _array

This morning I ran into a problem while adding elements to an array. With lazy loading in place, I observed the following:

    [self.array addObject:@"1"];

Adding the object through self.array succeeds this way, whereas

    [_array addObject:@"1"];

fails to add anything. Why?
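The cause becomes clear with a minimal sketch of the setup (the getter body below is an assumed, typical lazy-loading implementation; only the property name array comes from the question, and NSMutableArray is used so that addObject: compiles). Before the getter has run at least once, _array is still nil, and in Objective-C a message sent to nil is silently ignored:

    @property (nonatomic, strong) NSMutableArray *array;

    - (NSMutableArray *)array {
        if (_array == nil) {
            _array = [NSMutableArray array];   // created on first access through the getter
        }
        return _array;
    }

    [self.array addObject:@"1"];   // getter runs, the array is created, the add succeeds
    [_array addObject:@"1"];       // if the getter has never run, _array is nil: a message to nil, a no-op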


After digging into it, I found the following differences between accessing self.array and _array:

1. Different access mechanisms

self.array is equivalent to [self array], and _array is equivalent to self->_array. The first accesses the property through its getter; the second accesses the instance variable directly.
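A two-line illustration of that equivalence (assuming the same array property):

    NSArray *a = self.array;    // compiled as the message send [self array]
    NSArray *b = _array;        // direct ivar access, i.e. self->_array; no method is called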

Using self.array executes the getter, i.e. the - (NSArray *)array { ... } method.

If a lazy-loading getter ends with return self.array;, the getter calls itself and enters infinite recursion,

which is equivalent to:

    - (NSArray *)array {
        // ...
        return [self array];   // invokes the getter again: infinite recursion
    }
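The fix is to return the instance variable instead of going back through the property. A sketch of the corrected lazy getter (again using NSMutableArray, as in the opening example):

    - (NSMutableArray *)array {
        if (_array == nil) {
            _array = [NSMutableArray array];
        }
        return _array;   // return the ivar directly, so the getter does not call itself
    }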
Note: if - (NSArray *)array is not overridden, self.array and _array behave the same, because the synthesized getter simply returns the ivar.


2. KVO differences

Assigning through self.array goes through the setter and therefore fires KVO change notifications.
Writing _array directly does not.
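A sketch of that difference (the observer registration shown in the comment is the standard KVO API; the surrounding setup is assumed): after observing the key path @"array", only the assignment that goes through the setter triggers observeValueForKeyPath:ofObject:change:context:.

    // Somewhere earlier: [person addObserver:self forKeyPath:@"array"
    //                                 options:NSKeyValueObservingOptionNew context:NULL];

    self.array = [NSMutableArray array];   // setter runs; KVO willChange/didChange fire
    _array = [NSMutableArray array];       // ivar written directly; observers are not notified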


Recommendation: if you initialize the array directly instead of lazily, the two are equivalent for adding elements, but direct access still bypasses the setter and getter. My personal suggestion is to access the variable with the underscore only inside the setter and getter themselves, and go through self. everywhere else.
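Put together, the suggested convention looks like this (a hypothetical sketch, not code from the original post):

    @synthesize array = _array;   // needed when both accessors are provided manually

    // Inside the accessors: touch the ivar directly.
    - (void)setArray:(NSMutableArray *)array {
        _array = array;
    }

    // Everywhere else: go through self. so lazy loading and KVO keep working.
    - (void)addItem:(NSString *)item {
        [self.array addObject:item];
    }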

