How-To Make the root filesystem read-only
Introduction
There are several reasons why you might want to make your root file system read-only. I wanted to have a system on a flash disk, and since flash disks wear out after repeated read-write cycles, a read-only root is a very nice solution. Other reasons why you might want to make your root partition read-only include:
- If you want maximum security for your server and want it to boot from a read-only medium (e.g. a CD-ROM)
- If you want to make your own live CD
- To prevent a power loss or system crash from damaging the root partition
- If you want to mount the same nfsroot on several thin clients
The following procedure is what I did to turn my SuSE 10.1 root file system read-only. It should work on both earlier and later versions, but I haven't tested that yet. There may be better or more elegant solutions; if you think something is missing, please feel free to edit this howto.
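At its core, the change boils down to mounting / with the ro option, either in /etc/fstab or via a remount. The lines below are only a sketch: the device name and filesystem type are examples, and directories such as /var and /tmp still need writable storage, which the rest of this howto deals with.
/dev/sda2  /  ext3  ro  1  1
# mount -o remount,ro /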
Acknowledgments
Some of the information in this howto was found here.
The /proc Filesystem
One of the ways of determining what is going on inside the UNIX kernel is to make use of the /proc filesystem, whether that means checking the disk arrays connected to your server or querying kernel parameters. The /proc filesystem offers an interface to important kernel data structures that describe the state of a running UNIX kernel through special files. System administrators use the UNIX cat command to list the contents of those special files.
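For example, on Linux the following commands list memory statistics and the partitions the kernel currently knows about (the exact set of /proc files varies between kernel versions and UNIX flavors):
# cat /proc/meminfo
# cat /proc/partitions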
Under Linux, it is also possible to set certain kernel parameters by using the echo command. For example, to change net.core.rmem_default, the kernel parameter that controls the default socket receive buffer size, to 262144, use the following:
# echo 262144 > /proc/sys/net/core/rmem_default
It is important to understand that when setting kernel parameters in Linux using the echo command (as in the above example), the settings need to be reapplied each time the system boots. Some Linux distributions already provide a mechanism for this at boot time. On Red Hat, this can be configured in /etc/sysctl.conf (for example: net.core.rmem_default = 262144).
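As an illustration, to make the setting from the echo example survive a reboot on a Red Hat-style system, you could add the line below to /etc/sysctl.conf and then reload the file (sysctl -p reads /etc/sysctl.conf by default):
net.core.rmem_default = 262144
# sysctl -p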
This article attempts to list some of the more common files used by system administrators. Although most of these special files are general enough to apply to all flavors of UNIX (Solaris, Linux, HP-UX, etc.), I indicate those that apply only to a particular platform.
Troubleshooting the “device is busy” Error When Attempting to umount a Disk
Before a filesystem can be unmounted, it must be inactive. If any user has one of the filesystem’s directories as their current directory, or has any file within the filesystem open, you will receive an error message like the one below when attempting to unmount it:
# umount /dev/dsk/c0t2d0s7
umount: /dev/dsk/c0t2d0s7: device is busy
Well, the fuser command comes to the rescue. It can be used to determine which files or directories within a filesystem are currently in use, and to identify the processes and users that are using them.
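For example, assuming the busy filesystem is mounted on /export/home (the mount point here is just an illustration), the -c option reports on files on that mounted filesystem and -u adds the login name of the owner of each process:
# fuser -cu /export/home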
Why is My Root File System Read-Only?
RAID Theory
RAID: What it is; What it does
RAID is something all of us have heard about but very few of us understand, at least not fully. So let’s get off on the right foot. RAID stands for Redundant Array of Inexpensive (or Independent) Disks. There are a dozen or so theories as to why RAID was conceptualized, but the most accepted reason is that once upon a time, not long ago, disks were small and expensive. In order to provide a large amount of storage you had to have a bunch of disks all mounted in a single file tree, which was a real mess. To solve this problem, RAID was born. With RAID you could take a bunch of disks and create a big virtual disk out of them, which made administration much easier and more logical. Over time RAID grew to include new solutions for old problems, like disk performance, redundancy, and scalability. And for any skeptics out there, tell me where I can get a 10 terabyte disk drive… that should make us all agree that RAID has a place in the universe.
Just to clear things up a bit more, let’s see why we don’t simply need RAID, but actually WANT it. Say we’re building a production NFS server that will be used to store all of our software. We need this system to be extremely stable, because if it goes down no one can get or submit code. With RAID we could build a single virtual disk (volume) that meets our need for 200G of disk. But we also want to make sure that if a disk dies we don’t go down. So we use a mirror (another set of disks identical to the first set). If a disk dies we’re okay, because the mirror takes over; we essentially have two identical sets of the same data which are constantly kept up to date. See? Using these two simple RAID concepts we’ve achieved both availability (that’s our mirror saving us from disk crashes) and increased capacity (we’ve got a whole bunch of disks working together, which is cheaper than buying a single 200G disk… if you can find one!).
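On Linux, for instance, such a two-disk mirror could be built with mdadm; this is only a sketch, and the device names below are placeholders rather than a recommended production layout:
# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1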
Okay, enough of the bad examples. Let’s look at the different forms of RAID in use today.
RAID: The Details