For the datasets and evaluation metrics, see NLB 2021:
https://neurallatents.github.io/
The code below is based on
https://github.com/trungle93/STNDT
The original code uses Ray plus config files for hyperparameter search, pulls in many library dependencies, obscures the data flow, and is generally cluttered. Here it has been pared down to extract only the most essential core.
1. Data processing
1.1 Downloading the datasets
Requires: pip install dandi
download.py
root = "D:/NeuralLatent/"

def download_data():
    from dandi.download import download
    # The seven NLB'21 dandisets: MC_Maze and its Large/Medium/Small
    # variants, MC_RTT, Area2_Bump, and DMFC_RSG
    download("https://dandiarchive.org/dandiset/000128", root)  # MC_Maze
    download("https://dandiarchive.org/dandiset/000138", root)  # MC_Maze_Large
    download("https://dandiarchive.org/dandiset/000139", root)  # MC_Maze_Medium
    download("https://dandiarchive.org/dandiset/000140", root)  # MC_Maze_Small
    download("https://dandiarchive.org/dandiset/000129", root)  # MC_RTT
    download("https://dandiarchive.org/dandiset/000127", root)  # Area2_Bump
    download("https://dandiarchive.org/dandiset/000130", root)  # DMFC_RSG
1.2 Preprocessing the datasets
Requires the official toolkit: pip install nlb_tools
This step loads the spike-train data and resamples it into 5 ms time bins.
preprocess.py
## Example arguments:
# data_path = root + "/000129/sub-Indy/"
# dataset_name = "mc_rtt"
## Note: "./data" must be created beforehand
from nlb_tools.nwb_interface import NWBDataset
from nlb_tools.make_tensors import make_train_input_tensors, make_eval_input_tensors, combine_h5

def preprocess(data_path, dataset_name=None):
    # Load the NWB spike data and rebin it into 5 ms bins
    dataset = NWBDataset(data_path)
    bin_width = 5
    dataset.resample(bin_width)
    # Training tensors: held-in/held-out spikes plus behavior
    # and forward-prediction targets
    make_train_input_tensors(
        dataset, dataset_name=dataset_name, trial_split="train",
        include_behavior=True, include_forward_pred=True, save_file=True,
        save_path=f"./data/{dataset_name}_train.h5"
    )
    # Evaluation tensors: held-in spikes for the validation split
    make_eval_input_tensors(
        dataset, dataset_name=dataset_name, trial_split="val",
        save_file=True, save_path=f"./data/{dataset_name}_val.h5"
    )
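A minimal driver for the function above, using the example arguments from the comments (root is an assumption matching the one in download.py):

import os

if __name__ == "__main__":
    root = "D:/NeuralLatent/"
    os.makedirs("./data", exist_ok=True)  # save_path directory must exist (see note above)
    preprocess(root + "/000129/sub-Indy/", dataset_name="mc_rtt")

The combine_h5 helper imported above is not used in this excerpt; in the original pipeline it presumably merges the per-split h5 files into a single file downstream, but that call is not shown here.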