Mounting a New M.2 SSD on Linux

This article covers mounting a disk, managing the filesystem, and setting up a permanent mount on Linux, along with packing and extracting archives with tar, plus a brief note on the ALU in chips. It also walks through downloading a pretrained model in HuggingFace format and converting it.

1. Basic Mounting

View disk information

sudo fdisk -l
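
If the machine has several disks, lsblk gives a more compact view for identifying the new drive. The device name /dev/nvme0n1 used below is taken from this example; confirm it against your own output before formatting:

lsblk -o NAME,SIZE,TYPE,MOUNTPOINT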

Create a filesystem

sudo mkfs.ext4 /dev/nvme0n1
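
Heads-up: this formats the raw disk with no partition table and wipes anything already on /dev/nvme0n1. If you prefer the more common partition-first layout, a minimal sketch (afterwards, use /dev/nvme0n1p1 in place of /dev/nvme0n1 in the commands below):

# Create a GPT label and a single partition spanning the disk, then format the partition
sudo parted -s /dev/nvme0n1 mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/nvme0n1p1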

Create a mount point

sudo mkdir /home/zain

Mount

sudo mount /dev/nvme0n1 /home/zain
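
To confirm the mount took effect:

df -hT /home/zain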

2. Permanent Mounting

sudo vi /etc/fstab

Insert the following line, then save and quit with :wq:

/dev/nvme0n1   /home/zain   ext4   defaults   0   2

sudo mount -a
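
Device names such as /dev/nvme0n1 can shift across reboots when hardware changes, so fstab entries are more robust when keyed by filesystem UUID. A sketch (the UUID below is a placeholder; use the value blkid reports for your disk):

# Look up the filesystem UUID
sudo blkid /dev/nvme0n1

# Then reference it in /etc/fstab instead of the device name:
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx   /home/zain   ext4   defaults   0   2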

3. Packing Files on Linux

Pack an entire directory

tar -cvf zain.tar /home/zain
Flags: -c creates an archive, -v lists files as they are processed, -f names the archive file, and -z filters the archive through gzip.

Note that the command above omits -z, so it produces an uncompressed zain.tar; to produce the zain.tar.gz used in the extraction example below, add -z as shown next.
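
The compressed variant:

tar -czvf zain.tar.gz /home/zain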

Extract

tar -xzvf zain.tar.gz
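
tar can also extract into a different directory with -C (the target path here is just an example, and it must already exist):

tar -xzvf zain.tar.gz -C /tmp/restore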

4. Chips

ALU
Arithmetic logic unit: performs arithmetic (add, subtract, multiply, divide), logic (AND, OR, NOT), bitwise operations, shifts, and comparisons/conditions.
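
As a quick illustration, shell arithmetic expansion can exercise the same classes of operations an ALU implements in hardware:

echo $(( 6 + 3 ))    # arithmetic add        -> 9
echo $(( 6 & 3 ))    # bitwise AND           -> 2
echo $(( 6 | 3 ))    # bitwise OR            -> 7
echo $(( 6 << 1 ))   # left shift            -> 12
echo $(( 6 > 3 ))    # comparison/condition  -> 1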

5. File Transfer Between Linux Machines

scp /home/zain.zip root@192.168.6.111:/home/zain
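
scp copies directories too with -r; reusing the host and path from the example above:

# Recursively copy a local directory to the remote machine
scp -r /home/zain root@192.168.6.111:/home/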
6. Downloading and Converting a Pretrained Model

The script below sets up the Ascend environment, downloads the LLaMA2-Chinese-13b-Chat checkpoint in HuggingFace format, converts the weights for AscendSpeed, preprocesses the instruction dataset, and launches pretraining.

#!/bin/bash
# Set up the Ascend toolkit environment
source /usr/local/Ascend/ascend-toolkit/set_env.sh
source /home/ma-user/.bashrc

# Download the HuggingFace-format checkpoint files into ./llama2-13b
mkdir llama2-13b

wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/config.json
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/generation_config.json
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/pytorch_model.bin.index.json
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/special_tokens_map.json
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/tokenizer_config.json
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/train-00000-of-00001-a09b74b3ef9c3b56.parquet
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/pytorch_model-00001-of-00003.bin
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/pytorch_model-00002-of-00003.bin
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/pytorch_model-00003-of-00003.bin
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/tokenizer.model
wget -P  ./llama2-13b https://obs-whaicc-fae-public.obs.cn-central-221.ovaijisuan.com/checkpoint/huggingface/LLaMA/LLaMA2-Chinese-13b-Chat/pretrain_llama2_13B_ptd_8p.sh

# Convert the HuggingFace weights into AscendSpeed's checkpoint format
# (tensor parallel 1, pipeline parallel 1, DeepSpeed layout)
python tools/ckpt_convert/llama/convert_weights_from_huggingface.py --input-model-dir /home/ma-user/modelarts/user-job-dir/AscendSpeed/llama2-13b \
                                                                    --output-model-dir /home/ma-user/modelarts/user-job-dir/AscendSpeed/ckpt \
                                                                    --tensor-model-parallel-size 1 \
                                                                    --pipeline-model-parallel-size 1 \
                                                                    --type 13B \
                                                                    --deepspeed

# Stage the training parquet into its own dataset directory
mkdir dataset_llama2

cp -f ./llama2-13b/train-00000-of-00001-a09b74b3ef9c3b56.parquet ./dataset_llama2/

mkdir alpaca_preprocessed

# Tokenize the instruction dataset into the binary format the training script consumes
python tools/preprocess_data.py --input /home/ma-user/modelarts/user-job-dir/AscendSpeed/dataset_llama2/train-00000-of-00001-a09b74b3ef9c3b56.parquet \
                                --output-prefix /home/ma-user/modelarts/user-job-dir/AscendSpeed/alpaca_preprocessed/alpaca \
                                --tokenizer-type PretrainedFromHF \
                                --tokenizer-name-or-path llama2-13b \
                                --tokenizer-not-use-fast \
                                --handler-name GeneralInstructionHandler
echo "test_ls_1"
echo "test_ls_1"
ls 
echo "test_ls_1"
echo "test_ls_1"

echo "test_ls"
echo "test_ls"
ls /home/ma-user/modelarts/user-job-dir/AscendSpeed/alpaca_preprocessed

echo "test_ls"
echo "test_ls"
mv ./llama2-13b/pretrain_llama2_13B_ptd_8p.sh examples/llama2/ -f

# Launch the 8-device pretraining run
bash examples/llama2/pretrain_llama2_13B_ptd_8p.sh
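
If the whole block above is saved as a script (the filename download_and_train.sh is hypothetical), one way to run a long job like this detached, with output captured to a log:

nohup bash download_and_train.sh > train.log 2>&1 &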
