Prerequisites:
1. A Linux environment; Ubuntu is used here. (Cloud servers are excluded, since you cannot physically plug a USB drive into one.)
2. A USB drive with no important data on it. (Beginners risk wiping the drive's data with a mistyped command.)
Steps:
1. Before inserting the USB drive, check the current disk partition layout:
hadoop@node1:~$ sudo fdisk -l
[sudo] password for hadoop:
Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FF6DA26E-694F-4F26-BA84-0C38038BDD86
Device       Start       End   Sectors  Size Type
/dev/sda1     2048   1050623   1048576  512M EFI System
/dev/sda2  1050624   2549759   1499136  732M Linux filesystem
/dev/sda3  2549760 104855551 102305792 48.8G Linux LVM
Disk /dev/mapper/node1--vg-root: 47.8 GiB, 51350863872 bytes, 100294656 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/node1--vg-swap_1: 980 MiB, 1027604480 bytes, 2007040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
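As a cross-check (an aside, not part of the original steps), lsblk from util-linux prints a compact tree of block devices and their mount points, which is often easier to scan than the full fdisk report:
# List block devices as a tree: name, size, and mount point.
lsblk
# Restrict the output to the columns relevant here.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT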
2. Insert the USB drive and run the same command again; note what changed:
hadoop@node1:~$ sudo fdisk -l
Disk /dev/sda: 50 GiB, 53687091200 bytes, 104857600 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: gpt
Disk identifier: FF6DA26E-694F-4F26-BA84-0C38038BDD86
Device       Start       End   Sectors  Size Type
/dev/sda1     2048   1050623   1048576  512M EFI System
/dev/sda2  1050624   2549759   1499136  732M Linux filesystem
/dev/sda3  2549760 104855551 102305792 48.8G Linux LVM
Disk /dev/mapper/node1--vg-root: 47.8 GiB, 51350863872 bytes, 100294656 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/mapper/node1--vg-swap_1: 980 MiB, 1027604480 bytes, 2007040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk /dev/sdb: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x072097e2
Device     Boot Start      End  Sectors Size Id Type
/dev/sdb1  *     2048 62914559 62912512  30G  7 HPFS/NTFS/exFAT
The change (highlighted in red in the original screenshot) is the new entry Disk /dev/sdb, a 30 GiB device.
/dev/sdb carries a single partition, named /dev/sdb1.
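If several disks are attached and you are unsure which device node belongs to the new drive, the kernel log is another way to confirm it (the failed-mount message quoted later in this post suggests the same command):
# Show the most recent kernel messages; a newly attached USB drive is
# logged here with its assigned name (e.g. sdb) and its partitions.
dmesg | tail -n 20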
3. Create a mount point (the Linux analogue of a drive letter), for example "usb":
hadoop@node1:~$ cd /mnt/
hadoop@node1:/mnt$ sudo mkdir usb
hadoop@node1:/mnt$ cd usb/
Listing it with ls shows that the mount point directory is empty, because the USB drive has not been mounted onto it yet.
hadoop@node1:/mnt/usb$ ls
hadoop@node1:/mnt/usb$
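As a small aside, the cd/mkdir steps above can be collapsed into a single idempotent command; -p suppresses the error if the directory already exists, so it is safe to re-run:
# Create the mount point in one step.
sudo mkdir -p /mnt/usb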
4. Mount the partition onto the mount point:
hadoop@node1:/mnt$ sudo mount /dev/sdb1 /mnt/usb/
hadoop@node1:/mnt$ cd usb/
hadoop@node1:/mnt/usb$ ls
Spark大数据技术与应用-说课.pptx
The drive's contents are now listed, which means the mount succeeded.
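Besides listing files, the mount can be confirmed explicitly; both commands below are standard, read-only checks:
# Show the mounted filesystem, its type, and free space.
df -hT /mnt/usb
# Or query the mount table directly.
findmnt /mnt/usb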
A side note from along the way (read it if you are curious, skip it if not; either way, do NOT repeat these commands).
================= Side note: start =================
An earlier mount attempt failed as follows:
hadoop@node1:/mnt$ sudo mount -t vfat /dev/sdb1 /mnt/usb/
mount: wrong fs type, bad option, bad superblock on /dev/sdb1,
missing codepage or helper program, or other error
In some cases useful info is found in syslog - try
dmesg | tail or so.
Following the advice at
https://blog.csdn.net/gddxz_zhouhao/article/details/53169676
I ran fsck -t ext4 /dev/sdb1. (Do not copy this: forcing an ext4 fsck onto a partition that is not ext4, here an NTFS one, can corrupt the filesystem on it.)
hadoop@node1:/mnt$ sudo rm -rf usb
rm: cannot remove 'usb': Device or resource busy
Although rm reported busy, the files had already been deleted....
Everything on the USB drive was wiped: rm -rf on a directory that is an active mount point recurses into the mounted filesystem and deletes the files on the device itself, while the mount point directory survives as "busy". So be extremely careful around mounted drives: back the data up first, and never run rm -rf casually.
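For contrast, a safe teardown sequence looks like this sketch: flush pending writes, unmount, and only then remove the (now empty) mount point directory:
# Flush cached writes to the device.
sync
# Detach the filesystem from the mount point.
sudo umount /mnt/usb
# rmdir refuses to delete a non-empty directory, which makes it a much
# safer cleanup tool here than rm -rf.
sudo rmdir /mnt/usb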
================= Side note: end =================
5. Unmount with sudo umount /mnt/usb or, equivalently, sudo umount /dev/sdb1:
hadoop@node1:/mnt/usb$ sudo umount /mnt/usb
umount: /mnt/usb: target is busy
(In some cases useful info about processes that
use the device is found by lsof(8) or fuser(1).)
Note: the unmount fails with "target is busy" because the shell's working directory is still inside the mounted filesystem; leave the mount point first, then unmount.
Go back to the parent directory:
hadoop@node1:/mnt/usb$ cd ..
hadoop@node1:/mnt$ sudo umount /mnt/usb
hadoop@node1:/mnt$ ls usb/
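The empty ls output confirms the drive is detached. If umount keeps reporting busy even after leaving the directory, some other process is holding files open on the mount; as the error message hints, lsof(8) and fuser(1) can find it:
# List the processes with files open under the mount point.
sudo fuser -vm /mnt/usb
# The equivalent view with lsof.
sudo lsof +f -- /mnt/usb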
Note: a bare Linux install may not be able to mount NTFS (the type-7 partition above) out of the box; it is not that Linux cannot mount NTFS at all, but it may need the ntfs-3g driver installed first.
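A minimal fix sketch, assuming a Debian/Ubuntu system with apt and the usual package name:
# Install the userspace NTFS driver.
sudo apt-get install -y ntfs-3g
# Then mount the NTFS partition with it explicitly.
sudo mount -t ntfs-3g /dev/sdb1 /mnt/usb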
Done! Enjoy it!