Running the DNABERT Code

Linux System Issues

  1. Ubuntu error: "E: Unable to locate package"

Solution: update the package sources

sudo apt-get update
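
If the package still cannot be located after updating, confirming the exact package name helps; zlib1g-dev (which comes up later in this log) is used below only as an example name:

apt-cache search zlib1g-dev    # search the refreshed index for the package
apt-cache policy zlib1g-dev    # show which repository provides it, if any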

  2. "Could not get lock" error

Solution:

Method 1: use ps aux to find the process holding the lock and kill it directly
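
A minimal sketch of Method 1 (the PID is a placeholder; use whatever ps aux reports on your machine):

ps aux | grep -i apt    # list apt/dpkg processes that may be holding the lock
sudo kill -9 1234       # 1234 is a placeholder PID taken from the ps aux output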

Method 2: force-unlock

Delete the leftover lock files:

sudo rm /var/cache/apt/archives/lock
sudo rm /var/lib/dpkg/lock

  3. VMware Tools cannot copy and paste files between Windows and Linux

Solution:

(1) Uninstall VMware Tools

sudo apt-get autoremove open-vm-tools

(2) With the network connected, install the desktop tools

sudo apt-get install open-vm-tools-desktop

(3) Reboot
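
Rebooting from the same terminal (assuming sudo privileges):

sudo reboot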

  4. Install the required Python packages

tensorboardX

tensorboard

scikit-learn >= 0.22.2

seqeval

pyahocorasick

scipy

statsmodels

biopython

pandas

pybedtools

sentencepiece==0.1.91
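
A one-shot install sketch for the list above (assuming pip inside the project's virtualenv/conda environment; only the two pins from the list are fixed, everything else is whatever pip resolves):

pip install tensorboardX tensorboard "scikit-learn>=0.22.2" seqeval pyahocorasick \
    scipy statsmodels biopython pandas pybedtools sentencepiece==0.1.91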

Error: zlib.h: No such file or directory

Solution: install the zlib development package

sudo apt-get install zlib1g-dev

Error:

ERROR: Directory './' is not installable. Neither 'setup.py' nor 'pyproject.toml' found.

apex did not install successfully: the message means pip was pointed at a directory containing neither setup.py nor pyproject.toml, i.e., not the apex source tree.
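
A hedged sketch of the usual install-from-source route for NVIDIA apex (run from a directory where you keep source checkouts; the C++/CUDA extension build flags from the apex README are omitted here since the log above shows a CPU-only run):

git clone https://github.com/NVIDIA/apex
cd apex
pip install -v --no-cache-dir ./    # the trailing ./ points pip at the apex checkout, which does contain setup.py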

Error:

03/23/2023 23:05:15 - WARNING - __main__ -   Process rank: -1, device: cpu, n_gpu: 0, distributed training: False, 16-bits training: False
Traceback (most recent call last):
  File "/home/zyp/project/DNABERT-master/DNABERT-master/src/transformers/configuration_utils.py", line 225, in get_config_dict
    raise EnvironmentError
OSError

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "run_pretrain.py", line 885, in <module>
    main()
  File "run_pretrain.py", line 781, in main
    config = config_class.from_pretrained(args.config_name, cache_dir=args.cache_dir)
  File "/home/zyp/project/DNABERT-master/DNABERT-master/src/transformers/configuration_utils.py", line 176, in from_pretrained
    config_dict, kwargs = cls.get_config_dict(pretrained_model_name_or_path, **kwargs)
  File "/home/zyp/project/DNABERT-master/DNABERT-master/src/transformers/configuration_utils.py", line 241, in get_config_dict
    raise EnvironmentError(msg)
OSError: Model name 'PATH_TO_DNABERT_REPO/src/transformers/dnabert-config/bert-config-6/config.json' was not found in model name list. We assumed 'https://s3.amazonaws.com/models.huggingface.co/bert/PATH_TO_DNABERT_REPO/src/transformers/dnabert-config/bert-config-6/config.json/config.json' was a path, a model identifier, or url to a configuration file named config.json or a directory containing such a file but couldn't find any such file at this path or url.

Solution:

Change --config_name to the absolute path of the config file. The corrected run_pretrain.py arguments:

    --output_dir $OUTPUT_PATH \
    --model_type=dna \
    --tokenizer_name=dna$KMER \
    --config_name=/home/zyp/project/DNABERT-master/DNABERT-master/src/transformers/dnabert-config/bert-config-$KMER/config.json \
    --do_train \
    --train_data_file=$TRAIN_FILE \
    --do_eval \
    --eval_data_file=$TEST_FILE \
    --mlm \
    --gradient_accumulation_steps 25 \
    --per_gpu_train_batch_size 10 \
    --per_gpu_eval_batch_size 6 \
    --save_steps 500 \
    --save_total_limit 20 \
    --max_steps 200000 \
    --evaluate_during_training \
    --logging_steps 500 \
    --line_by_line \
    --learning_rate 4e-4 \
    --block_size 512 \
    --adam_epsilon 1e-6 \
    --weight_decay 0.01 \
    --beta1 0.9 \
    --beta2 0.98 \
    --mlm_probability 0.025 \
    --warmup_steps 10000 \
    --overwrite_output_dir \
    --n_process 24
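
A quick sanity check before rerunning, to confirm the absolute path actually resolves (KMER=6 is assumed here, matching the error above):

KMER=6
ls -l /home/zyp/project/DNABERT-master/DNABERT-master/src/transformers/dnabert-config/bert-config-$KMER/config.json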

Error:

03/23/2023 23:24:11 - WARNING - __main__ -   Process rank: -1, device: cpu, n_gpu: 0, distributed training: False, 16-bits training: False
03/23/2023 23:24:11 - INFO - transformers.configuration_utils -   loading configuration file /home/zyp/project/DNABERT-master/DNABERT-master/src/transformers/dnabert-config/bert-config-6/config.json
03/23/2023 23:24:11 - INFO - transformers.configuration_utils -   Model config BertConfig {
  "architectures": [
    "BertForMaskedLM"
  ],
  "attention_probs_dropout_prob": 0.1,
  "bos_token_id": 0,
  "do_sample": false,
  "eos_token_ids": 0,
  "finetuning_task": null,
  "hidden_act": "gelu",
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "id2label": {
    "0": "LABEL_0",
    "1": "LABEL_1"
  },
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "is_decoder": false,
  "label2id": {
    "LABEL_0": 0,
    "LABEL_1": 1
  },
  "layer_norm_eps": 1e-12,
  "length_penalty": 1.0,
  "max_length": 20,
  "max_position_embeddings": 512,
  "model_type": "bert",
  "num_attention_heads": 12,
  "num_beams": 1,
  "num_hidden_layers": 12,
  "num_labels": 2,
  "num_return_sequences": 1,
  "num_rnn_layer": 1,
  "output_attentions": false,
  "output_hidden_states": false,
  "output_past": true,
  "pad_token_id": 0,
  "pruned_heads": {},
  "repetition_penalty": 1.0,
  "rnn": "lstm",
  "rnn_dropout": 0.0,
  "rnn_hidden": 768,
  "split": 10,
  "temperature": 1.0,
  "top_k": 50,
  "top_p": 1.0,
  "torchscript": false,
  "type_vocab_size": 2,
  "use_bfloat16": false,
  "vocab_size": 4101
}

============================================================
<class 'transformers.tokenization_dna.DNATokenizer'>
Traceback (most recent call last):
  File "run_pretrain.py", line 885, in <module>
    main()
  File "run_pretrain.py", line 789, in main
    tokenizer = tokenizer_class.from_pretrained(args.tokenizer_name, cache_dir=args.cache_dir)
  File "/home/zyp/project/DNABERT-master/DNABERT-master/src/transformers/tokenization_utils.py", line 377, in from_pretrained
    return cls._from_pretrained(*inputs, **kwargs)
  File "/home/zyp/project/DNABERT-master/DNABERT-master/src/transformers/tokenization_utils.py", line 479, in _from_pretrained
    list(cls.vocab_files_names.values()),
OSError: Model name 'dna6' was not found in tokenizers model name list (dna3, dna4, dna5, dna6). We assumed 'dna6' was a path, a model identifier, or url to a directory containing vocabulary files named ['vocab.txt'] but couldn't find such vocabulary files at this path or url.

Solution:

(1) Check the installed transformers version:

pip list

(2) Upgrade transformers

pip install --upgrade transformers    # somewhat slow; better to use a PyPI mirror
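
If the download is slow, a mirror can be passed explicitly (the Tsinghua PyPI mirror is used here only as an example):

pip install --upgrade transformers -i https://pypi.tuna.tsinghua.edu.cn/simple
pip show transformers    # confirm the upgraded version is the one being picked up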

The error disappears.

A new error appears:

ImportError: cannot import name 'DNATokenizer'

Most likely a path problem: DNATokenizer exists only in the modified src/transformers bundled with DNABERT, so the import is probably resolving to the pip-installed transformers instead of the local copy.
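
A small diagnostic sketch (this is an assumption about the cause, not something verified in the original log): print which transformers the interpreter actually imports; a site-packages path instead of the DNABERT src/transformers directory means the bundled copy is being shadowed.

python -c "import transformers; print(transformers.__file__)"
python -c "from transformers.tokenization_dna import DNATokenizer"    # only succeeds with DNABERT's bundled transformers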

Ubuntu crashed: boot hangs at "Started GNOME Display Manager" and the system will not come up.

Reference: https://blog.csdn.net/qq_42680785/article/details/116195840

Cause: the disk is full, or an update went wrong.

Solution:

(1) Press Alt+Ctrl+F6 to switch to a text console (Ubuntu 18.04)

(2) Check disk space

df -h

The /dev/loopX filesystems show 100% usage.
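
A generic sketch (not from the original post) for locating what is actually consuming space before deleting anything:

sudo du -xh --max-depth=1 / | sort -h    # -x stays on the root filesystem; the largest directories sort to the bottom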

(3) Free up disk space

sudo apt autoremove --purge snapd

(4) Reboot

(5) Clean the APT cache

du -sh /var/cache/apt/archives
sudo apt-get clean