The `looped_samples` method of the `OpenFoldDataset` class in the AlphaFold3 `data_modules` module samples data in an endless loop, so that samples can be supplied indefinitely to a PyTorch `DataLoader` that iterates over the data during training. The argument `dataset_idx` selects the dataset to draw from, i.e. `self.datasets[dataset_idx]`.
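To see how such an infinite index generator is typically consumed, here is a minimal, hypothetical sketch (the names `toy_looped_samples` and `ToyStream` are illustrative and not part of OpenFold), assuming a simplified generator wrapped in a `torch.utils.data.IterableDataset`:

```python
import torch
from torch.utils.data import DataLoader, IterableDataset

def toy_looped_samples(n):
    # Stand-in for looped_samples: yields dataset indices forever.
    while True:
        yield from range(n)

class ToyStream(IterableDataset):
    """Wraps an infinite index generator so a DataLoader can iterate over it."""
    def __init__(self, n):
        self.n = n

    def __iter__(self):
        for idx in toy_looped_samples(self.n):
            # A real dataset would load and featurize the sample here.
            yield torch.tensor(idx)

loader = DataLoader(ToyStream(5), batch_size=4)
print(next(iter(loader)))  # tensor([0, 1, 2, 3])
```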
Source code:
def looped_samples(self, dataset_idx):
    max_cache_len = int(self.epoch_len * self.probabilities[dataset_idx])
    dataset = self.datasets[dataset_idx]
    idx_iter = self.looped_shuffled_dataset_idx(len(dataset))
    chain_data_cache = dataset.chain_data_cache
    while True:
        weights = []
        idx = []
        for _ in range(max_cache_len):
            candidate_idx = next(idx_iter)
            chain_id = dataset.idx_to_chain_id(candidate_idx)
            chain_data_cache_entry = chain_data_cache[chain_id]
            if not self.deterministic_train_filter(chain_data_cache_entry):
                continue

            p = self.get_stochastic_train_filter_prob(
                chain_data_cache_entry,
            )
            weights.append([1. - p, p])
            idx.append(candidate_idx)

        samples = torch.multinomial(
            torch.tensor(weights),
            num_samples=1,
            generator=self.generator,
        )
        samples = samples.squeeze()

        cache = [i for i, s in zip(idx, samples) if s]

        for datapoint_idx in cache:
            yield datapoint_idx
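The key trick in the loop above is the accept/reject step: each candidate that passes the deterministic filter gets a weight row `[1 - p, p]`, and one `torch.multinomial` draw per row acts as an independent Bernoulli trial that keeps the candidate with probability `p`. Below is a small, self-contained sketch of just that step; the probabilities and indices are made up for illustration:

```python
import torch

# Hypothetical acceptance probabilities from the stochastic train filter.
probs = [0.9, 0.5, 0.1, 1.0]
candidate_idx = [11, 42, 7, 103]  # hypothetical dataset indices

# Row i is [1 - p_i, p_i]; drawing one sample per row returns 1 ("keep")
# with probability p_i and 0 ("drop") otherwise.
weights = torch.tensor([[1.0 - p, p] for p in probs])

generator = torch.Generator()
generator.manual_seed(0)

samples = torch.multinomial(weights, num_samples=1, generator=generator).squeeze()

# Mirrors `cache = [i for i, s in zip(idx, samples) if s]` in looped_samples.
kept = [i for i, s in zip(candidate_idx, samples) if s]
print(samples.tolist(), kept)
```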
Code walkthrough:
max_cache_len = int(self.epoch_len * self.probabilities[dataset_idx])
- `epoch_len` is the expected total number of samples in one training epoch.
- `self.probabilities[dataset_idx]` is the sampling probability (weight) assigned to this dataset, so their product is the expected number of samples this dataset contributes per epoch (e.g., with an `epoch_len` of 10000 and a probability of 0.2, `max_cache_len` would be 2000).