OpenSolaris Filesystem Allocation

Another question: is this filesystem allocation OK? I've heard something weird about Solaris /tmp, that it shares space with the swap partition or something? Are they both the same thing, swap and /tmp?

I am installing Solaris to a 250GB drive. First I thought of making one single partition for Solaris and one for Windows XP, using around 100GB for Solaris and the rest for Windows. How about this setup? Generally I want any filesystem that gets written to frequently on a separate partition (because of fragmentation). If a filesystem never gets written to, it is OK to put them all together on one slice.

 

Any suggestions? Is /tmp too small? I'm a Unix noob and don't really know what sizes are appropriate or typical for a Unix installation. Are there any other filesystems in the list that get modified frequently?


/var - 3GB (log files; gets written to frequently?)

 

/tmp - 3GB (unsure of this, but frequently written to? is it the same as swap?)

 

/ - 30GB (I may want to install BrandZ and Linux, with Linux software on top, like MATLAB, etc. I will also install Wine and Windows stuff on top, like Diablo 2, StarCraft, etc.)

 

/home - 50GB (for downloading big files; when installing via the CBE package installer it is good to have a dedicated CBE user that downloads binary packages and compiles them in /home/cbe, and finally installs to /)

 

swap - 2GB (I have 1GB RAM now and plan to upgrade to a quad core and 4GB RAM later. I've heard that swap should be twice as large as RAM, but does that recommendation no longer hold?)


Then I will have two FAT32 partitions at 32,768MB each (32GB, the largest FAT32 volume Windows will create without unconventional tricks) for sharing files with Windows XP. If Solaris needs more space I can convert one of the FAT32 partitions to UFS should the need arise.

 

The rest of the disk will be the Windows XP system disk: 30GB, with NTFS for the leftover space.


/tmp on Solaris uses tmpfs, a pseudo-filesystem that lives in virtual memory and is backed by the swap slice, usually s1.

 

Any time one writes anything to /tmp, one is actually using one's virtual memory (swap + RAM).
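You can see this for yourself on a Solaris box (illustrative commands; exact output varies by release):

# swap -l        # list the physical swap slices, e.g. /dev/dsk/c0t0d0s1
# swap -s        # summary of virtual memory usage (RAM + disk swap)
# df -k /tmp     # /tmp shows up with filesystem type "swap" (tmpfs), not a disk slice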

 

UNIX filesystems do intelligent allocation during writes, resulting in fragmentation of 0.1-0.2% even over a very long period of use, so fragmentation is mostly a non-issue on UNIX systems.

 

Consequently, there are no "disk defragmenters" on UNIX because they are not needed.

 

Your swap slice should be 1GB at a minimum if you have less than 1GB of RAM.

 

If you have more than 1GB of RAM, then the following formula works well:

 

swap = sizeof(RAM + 64MB)
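Plugging in the numbers from the question: with 1GB of RAM that works out to 1024MB + 64MB = 1088MB (about 1.1GB), and after the planned 4GB upgrade it becomes 4096MB + 64MB = 4160MB (about 4.1GB).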

 

In principle you need at least as much swap space as you have physical RAM, so that if the system crashes, it can dump all of its memory onto the swap slice. Otherwise, it can double-panic, and that can turn out to be very, very ugly.
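On Solaris the crash-dump device is managed with dumpadm(1M), and by default it points at the swap slice; something along these lines will show or set it (the device name below is just an example):

# dumpadm                          # print the current dump device and savecore directory
# dumpadm -d /dev/dsk/c0t0d0s1     # point the dump device at the swap slice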

 

Swap allocation in general depends on what the system will be used for. For a desktop or an infrastructure server, the above formula works well. For a system that will do lots of finite element analysis, working with models that are gigabytes in size, with a million degrees of freedom and 100 million elements, the swap slice might need to be many, many times the physical RAM.

 

I recommend you allocate the following slices in your Solaris partition:

 

s0: / sizeof(s2 - (s1 + s7))

s1: swap sizeof(RAM + 64MB)

s7: unassigned 64MB (for metadb)

 

That's it. No /var, no /usr, and especially no /home. /home is reserved for the AutoMounter facility. If you want a separate home directory slice, use /export/home and size it to whatever you think you will need, but I advise against doing that, because you will end up using the disk space inefficiently.
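For context on that s7 slice: if you later set up Solaris Volume Manager, its state database replicas live there, and creating them looks roughly like this (the disk name is just an example):

# metadb -a -f -c 3 c0t0d0s7    # put 3 metadb replicas on the 64MB unassigned slice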


File Allocation Table?

 

Is Microsoft still pushing that? I thought that Microsoft took off with HPFS (High Performance File System) from OS/2 and then released a thing called NTFS. I don't think that Microsoft ever published the inner details of NTFS, did they?

 

In any case... you don't need to worry about such things with UFS. You need to worry about other things like block size, fragment size, and inode density, as well as possible alternate sectors per cylinder and the maximum number of logical blocks belonging to one file. So, as an example of a UFS filesystem that is currently up and running here:

 

# mkfs -m /dev/rdsk/c0t1d0s5
mkfs -F ufs -o nsect=228,ntrack=10,bsize=4096,fragsize=1024,cgsize=8,free=3,rps=120,nbpi=1024,opt=t,apc=2,gap=0,nrpos=8,maxcontig=1 /dev/rdsk/c0t1d0s5 6030600
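(For reference: mkfs -m does not create anything; the -m flag just reports back the command line that was originally used to build the existing filesystem, which is why all of its tuning parameters are visible above.)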

 

So... a different set of issues at play. For the most part you will never see those details, I hope. Also, I fail to see any benefit in tweaking those UFS parameters anymore, so I hope you never need to see them. With HPFS and NTFS you never saw the details and couldn't. With UFS and ZFS I hope that you won't care to see the details and will just use them.

 

Think of ZFS as your future and you will be fine. It has all the features that a user could ever want in a filesystem, and it feels like a SAN half the time.
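To give a feel for that, going from a bare disk to a mounted, growable filesystem takes only a couple of commands (the pool, disk, and quota names below are made-up examples):

# zpool create tank c0t1d0         # turn a whole disk into a storage pool
# zfs create tank/home             # create a filesystem; no sizing, no newfs, mounted immediately
# zfs set quota=50g tank/home      # optionally cap it; the cap can be changed at any time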


It's an age-old argument. Considering that a single user can bring a system to a halt just by filling up /tmp, it's a pretty moot point.

 

One really should have a monitoring infrastructure in place with thresholds set, sending an alert to the system owner when the threshold is reached. There is simply no replacement for that.
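A minimal sketch of such a check, runnable from cron (the 90% threshold and the mail recipient are placeholders to adapt):

#!/bin/sh
# Alert when any mounted filesystem crosses the capacity threshold.
THRESHOLD=90
df -k | awk -v t=$THRESHOLD 'NR > 1 { sub(/%/, "", $5); if ($5 + 0 > t) print $6, $5 }' |
while read fs pct; do
    echo "$fs is at ${pct}% capacity" | mailx -s "disk space alert: $fs" root
done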

 

Also, modern PCs have disks of 500-750GB. Those aren't easily filled, and a desktop user will know if they fill up their own filesystem. Sysadmins, on the other hand, will have a monitoring infrastructure in place, as described above.

 

I really don't think there is any valid argument for slicing up the disk.

Besides, both HP-UX and IRIX take a whole-disk root approach. I have never seen one of those run into trouble (although it's theoretically possible) in all my years.
