Monday 16 May 2011

RAID

RAID Implementation
RAID 0 or Striping: It provides improved performance and additional storage but no fault tolerance, which is why simple stripe sets are normally referred to as RAID 0. Any single disk failure destroys the array, and the likelihood of failure increases with more disks in the array (at a minimum, catastrophic data loss is almost twice as likely compared to a single drive without RAID). A single disk failure destroys the entire array because data written to a RAID 0 volume is broken into fragments called blocks; the number of blocks is dictated by the stripe size, which is a configuration parameter of the array. The blocks are written to their respective disks simultaneously, which allows smaller sections of the data to be read off the drives in parallel, increasing bandwidth. RAID 0 does not implement error checking, so any error is uncorrectable. More disks in the array means higher bandwidth, but greater risk of data loss.
It requires a minimum of two hard disks to create a single high-performance volume. Array size equals the sum of all disks in the array. Excellent performance (blocks are striped). No redundancy (no mirror, no parity). Do not use this for any critical system.
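As an illustration, a RAID 0 array can be created with mdadm in the same way as the RAID 1 example later in this post. The device names and the 64KB chunk size below are placeholders for this sketch, not values from a real system.

# mdadm -C /dev/md1 -a yes -l 0 -n 2 -c 64 /dev/sdb1 /dev/sdc1

         (-l 0 = RAID level 0. -c = chunk (stripe) size in KB. /dev/md1, /dev/sdb1 and /dev/sdc1 are example names only.)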

RAID 1 or Mirroring: While any number of disks may be used, many implementations deal with only two. The array continues to operate as long as at least one drive is functioning. With appropriate operating system support, read performance can increase, with only a minimal write performance penalty; implementing RAID 1 with a separate controller for each disk in order to perform simultaneous reads (and writes) is sometimes called multiplexing (or duplexing when there are only two disks).
It requires a minimum of two hard disks, used in even numbers. Array size equals the size of the smallest disk used. Good performance (no striping, no parity). Excellent redundancy (blocks are mirrored).

RAID 5 or Striping with Parity: Distributes parity along with the data and requires all drives but one to be present to operate. The array is not destroyed by a single drive failure. Upon drive failure, any subsequent reads can be calculated from the distributed parity such that the drive failure is masked from the end user. However, a single drive failure results in reduced performance of the entire array until the failed drive has been replaced and the associated data rebuilt.
It requires a minimum of three hard disks. Good performance (blocks are striped). Good redundancy (distributed parity). The most cost-effective option providing both performance and redundancy. Use it for databases that are heavily read-oriented; write operations will be slow.
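For comparison, a RAID 5 array can be created the same way. Again, the device and array names below are placeholders for the sketch.

# mdadm -C /dev/md2 -a yes -l 5 -n 3 /dev/sdb1 /dev/sdc1 /dev/sdd1

         (-l 5 = RAID level 5. -n 3 = three active devices; usable capacity is the number of disks minus one, times the smallest disk.)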

Implementing RAID 1:
Create two additional partitions of around 512MB each (or as per requirement; I used 512MB here for testing) and set the partition type to “Linux RAID”.
Use fdisk to create two logical partitions.
Set the partition type (t) to fd.
Save and exit from fdisk.
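An fdisk session for this step looks roughly like the following. The partition numbers depend on your existing layout; 6 and 7 match the /dev/sda6 and /dev/sda7 used in this example.

# fdisk /dev/sda
   n                   (new partition; accept the default start and enter +512M for the size, then repeat for the second partition)
   t                   (change a partition's type)
   6                   (partition number; repeat with 7 for the second partition)
   fd                  (type code for "Linux raid autodetect")
   w                   (write the table and exit)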

# partprobe                          (so the kernel re-reads the new partition table)
# mdadm -C /dev/md0 -a yes -l 1 -n 2 /dev/sda{6,7}

         (-C = create a new array. -a = auto, create the device file if needed. -l = set the RAID level. -n = specify the number of active devices in the array.)
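At this point the new array should appear in /proc/mdstat, along with the progress of its initial mirror resync:

# cat /proc/mdstat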
# mkfs.ext3 /dev/md0

            Mount the filesystem:
# mkdir /data
# mount /dev/md0 /data

            Copy some files onto it.
# cp -a /lib /data

Get information about the current RAID configuration.
# mdadm --detail /dev/md0
Put an entry for the new filesystem in the fstab file to make the mount permanent.
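A typical fstab line for this example would look like the one below (adjust the mount point and filesystem to your setup). It is also common to record the array configuration so it is assembled at boot; on Red Hat-style systems the file is /etc/mdadm.conf.

/dev/md0    /data    ext3    defaults    0 0

# mdadm --detail --scan >> /etc/mdadm.conf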

            Fail the disk: Use mdadm --fail to mark one of the partitions as faulty, then use mdadm --remove to remove that partition from the RAID array. Check /proc/mdstat and /var/log/messages to see how the system reacts to this error.
# mdadm --fail /dev/md0 /dev/sda7
# cat /proc/mdstat
# mdadm --remove /dev/md0 /dev/sda7

            Create a new partition and add it to the RAID array.
# mdadm --add /dev/md0 /dev/sda8
# cat /proc/mdstat
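The rebuild onto the new partition can be watched live with the standard commands below; when it finishes, both devices should be listed as active again.

# watch -n 1 cat /proc/mdstat          (refreshes the sync progress every second)
# mdadm --detail /dev/md0              (check the state of each device in the array)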


Add a SCSI hard disk without restarting the system.
# echo "- - -" > /sys/class/scsi_host/host0/scan
# fdisk -l
# tail -f /var/log/messages
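If the new disk does not appear, the controller may be registered under a different SCSI host number than host0. A common workaround is to rescan every host:

# for h in /sys/class/scsi_host/host*; do echo "- - -" > $h/scan; done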
