RAID.v7
RAID
Hat tips:
http://www.tecmint.com/create-raid-6-in-linux/
Raid From Scratch Raid Problems
YUM
yum install mdadm
Then
systemctl start mdmonitor
systemctl enable mdmonitor
Setup Raid
Starting up an existing RAID
I upgraded a server from CentOS 5 to CentOS 7.
/dev/sde is now the new HD with the OS, and sd[abcd]1 is where the old RAID 5 array lives.
cat /proc/mdstat
gave me
md127 : active (auto-read-only) raid5 sda1[0] sdb1[1] sdd1[3] sdc1[2]
5860535808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
So to stop md127
mdadm --stop /dev/md127
mdadm: stopped /dev/md127
now
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
unused devices: <none>
Good. Now let's make the node for md0
mknod /dev/md0 b 9 0
And
yum install mdadm
And enable in systemctl
systemctl start mdmonitor
systemctl enable mdmonitor
We need the mdadm dir
mkdir /etc/mdadm
and the mdadm.conf file
mdadm --examine --scan /dev/sda1 >>/etc/mdadm/mdadm.conf
and the bits that are missing from this file
emacs /etc/mdadm/mdadm.conf
Add
DEVICE partitions
CREATE owner=root group=disk mode=0660 auto=yes metadata=1
MAILADDR root
Now assemble the pieces, in my case
mdadm --assemble /dev/md0 /dev/sd[abcd]1
mdadm: /dev/md0 has been started with 4 drives.
Let's have a look:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4]
md0 : active raid5 sda1[0] sdd1[3] sdc1[2] sdb1[1]
5860535808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
Mount this somewhere and have a look.
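Before mounting, it is worth checking that every member slot in /proc/mdstat shows up ("U") rather than missing ("_"). A small sketch; the `md_healthy` helper and its optional file argument are my own, not standard tooling:

```shell
# md_healthy: succeed only if every member slot of the named array is up.
# $1 = array name (e.g. md0); $2 = mdstat file (defaults to /proc/mdstat,
# overridable so the helper can be tested against saved output).
md_healthy() {
  grep -A1 "^$1 :" "${2:-/proc/mdstat}" | grep -qE '\[U+\]'
}

# Example (assumed mount point): only mount if the array is fully up.
# md_healthy md0 && mount /dev/md0 /mnt/raid
```

The `[UUUU]` field in /proc/mdstat has one letter per member, so a run of only U's between the brackets means no device is missing.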
Getting rid of an existing RAID
In this example we have a RAID:
md126 : active (auto-read-only) raid6 sda[0] sde[1] sdg[2] sdf[3] sdb[5]
7813531648 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/5] [UUUU_U]
This should be md3
You could run
mdadm --detail /dev/md126
to have a look at the settings and status for this RAID.
In order to remove this device, first stop it by typing the following at a shell prompt:
umount /dev/md126
mdadm --stop /dev/md126
mdadm: stopped /dev/md126
Once stopped, you can remove the /dev/md126 device by running the following command:
mdadm --remove /dev/md126
I got:
mdadm: error opening /dev/md126: No such file or directory
As md126 is not a persistent md device, this error is expected.
Finally, to remove the superblocks from all associated devices, in my case I typed:
mdadm --zero-superblock /dev/sda1 /dev/sdb1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
Each of my HDs still has:
Device     Boot  Start  End         Blocks      Id  System
/dev/sda1        2048   3907029167  1953513560  fd  Linux raid autodetect
I was meant to have deleted the existing RAID in the previous steps; to check, I ran
mdadm -E /dev/sd[abefgh]
I should have got
mdadm: no MD superblock detected on.....
Instead I got for all the drives
/dev/sda:
Magic : a92b4efc
Version : 1.2
Feature Map : 0x0
Array UUID : 167e7342:59c60030:19ff03c1:4182c54e
Name : openmediavault:MyRaidGroup
Creation Time : Fri Oct 17 13:31:02 2014
Raid Level : raid6
Raid Devices : 6
Avail Dev Size : 3906767024 (1862.89 GiB 2000.26 GB)
Array Size : 7813531648 (7451.56 GiB 8001.06 GB)
Used Dev Size : 3906765824 (1862.89 GiB 2000.26 GB)
Data Offset : 262144 sectors
Super Offset : 8 sectors
State : clean
Device UUID : 5b71255f:e8aa1f63:48a51404:1b225198
Update Time : Wed Nov 12 10:34:15 2014
Checksum : 777ac6ea - correct
Events : 3669
Layout : left-symmetric
Chunk Size : 512K
Device Role : Active device 0
Array State : AAAA.A ('A' == active, '.' == missing)
To totally clean out the HDs in question I ran the commands below.
DO NOT JUST RUN THESE COMMANDS: yours will differ depending on the HDs you have in the RAID.
dd if=/dev/zero of=/dev/sda bs=512 count=100
dd if=/dev/zero of=/dev/sdb bs=512 count=100
dd if=/dev/zero of=/dev/sde bs=512 count=100
dd if=/dev/zero of=/dev/sdf bs=512 count=100
dd if=/dev/zero of=/dev/sdg bs=512 count=100
dd if=/dev/zero of=/dev/sdh bs=512 count=100
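The six dd invocations above can also be wrapped in a small helper. The device list is from this example only; substitute your own RAID members and be absolutely sure of the names, because this destroys data. The helper name is my own:

```shell
# DANGER: wipes the first 100 sectors (old RAID metadata and the partition
# table) of every device passed to it. Double-check the device names first.
wipe_raid_headers() {
  for dev in "$@"; do
    dd if=/dev/zero of="$dev" bs=512 count=100
  done
}

# In this example it would be: wipe_raid_headers /dev/sd[abefgh]
```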
Now
mdadm -E /dev/sd[abefgh]
gives me
mdadm: No md superblock detected on /dev/sda.
mdadm: No md superblock detected on /dev/sdb.
mdadm: No md superblock detected on /dev/sde.
mdadm: No md superblock detected on /dev/sdf.
mdadm: No md superblock detected on /dev/sdg.
mdadm: No md superblock detected on /dev/sdh.
mdadm Commands
add
Problem:
md0 : active raid1 sdd1[1]
4192192 blocks super 1.2 [2/1] [_U]
Add back missing sdc1 to md0
mdadm --add /dev/md0 /dev/sdc1
Now
md0 : active raid1 sdc1[2] sdd1[1]
4192192 blocks super 1.2 [2/1] [_U]
[===============>.....] recovery = 76.6% (3214784/4192192) finish=0.1min speed=146126K/sec
Creating a new RAID6
Create the node
mknod /dev/md2 b 9 2
We are starting with 6 x 2TB drives, each with a single partition:
/dev/sdx1 2048 3907029167 1953513560 fd Linux raid autodetect
To achieve this, on each drive run
fdisk /dev/sda
Follow the steps below to create the partition:
- Press 'n' to create a new partition.
- Choose 'p' for a primary partition.
- Choose partition number 1.
- Accept the default first and last sectors by pressing Enter twice.
- Press 'p' to print the defined partition.
- Press 't' to change the partition type.
- Press 'L' to list all available types.
- Choose 'fd' for Linux raid autodetect and press Enter to apply.
- Press 'p' again to print the changes we have made.
- Press 'w' to write the changes.
Then the same for the other disks.
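With several drives to do, the same keystrokes (minus the print/list steps) can be piped into fdisk non-interactively. This is my own sketch, not from the original walkthrough; try it on a disk image or a scratch disk before pointing it at real drives:

```shell
# Feed fdisk the keystrokes: new, primary, partition 1, default start and
# end sectors, change type to fd (Linux raid autodetect), write.
make_raid_partition() {
  printf 'n\np\n1\n\n\nt\nfd\nw\n' | fdisk "$1"
}

# e.g. for d in /dev/sda /dev/sdb; do make_raid_partition "$d"; done
```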
Now we have:
mdadm -E /dev/sd[abefgh]
/dev/sda: MBR Magic : aa55 Partition[0] : 3907027120 sectors at 2048 (type fd)
/dev/sdb: MBR Magic : aa55 Partition[0] : 3907027120 sectors at 2048 (type fd)
/dev/sde: MBR Magic : aa55 Partition[0] : 3907027120 sectors at 2048 (type fd)
/dev/sdf: MBR Magic : aa55 Partition[0] : 3907027120 sectors at 2048 (type fd)
/dev/sdg: MBR Magic : aa55 Partition[0] : 3907027120 sectors at 2048 (type fd)
/dev/sdh: MBR Magic : aa55 Partition[0] : 3907027120 sectors at 2048 (type fd)
Also I ran
mdadm -E /dev/sd[abefgh]1
sda1 should have looked like the others, but instead I got
/dev/sda1: MBR Magic : aa55
mdadm: No md superblock detected on /dev/sdb1.
mdadm: No md superblock detected on /dev/sde1.
mdadm: No md superblock detected on /dev/sdf1.
mdadm: No md superblock detected on /dev/sdg1.
mdadm: No md superblock detected on /dev/sdh1.
To overcome this I ran dd again on sda with a bigger count
dd if=/dev/zero of=/dev/sda bs=512 count=100000
and recreated the partition table
fdisk /dev/sda
This got us the desired result
mdadm: No md superblock detected on /dev/sda1.
Now
mdadm --create /dev/md2 --level=6 --raid-devices=6 /dev/sda1 /dev/sdb1 /dev/sde1 /dev/sdf1 /dev/sdg1 /dev/sdh1
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md2 started.
cat /proc/mdstat
Personalities : [raid1] [raid6] [raid5] [raid4]
md2 : active raid6 sdh1[5] sdg1[4] sdf1[3] sde1[2] sdb1[1] sda1[0]
7813527552 blocks super 1.2 level 6, 512k chunk, algorithm 2 [6/6] [UUUUUU]
[>....................] resync = 0.1% (2284544/1953381888) finish=583.9min speed=55683K/sec
Also you can
watch -n1 cat /proc/mdstat
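If you'd rather script around the resync than watch it, the percentage can be pulled straight out of /proc/mdstat. A small sketch; the helper name and its optional file argument are mine, added for testability:

```shell
# Print the current resync/recovery percentage from an mdstat listing.
# $1 = mdstat file (defaults to /proc/mdstat).
raid_progress() {
  grep -oE '(resync|recovery) = *[0-9.]+%' "${1:-/proc/mdstat}" | grep -oE '[0-9.]+%'
}

# e.g. while :; do raid_progress; sleep 60; done
```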
Verify the raid devices using the following command.
mdadm -E /dev/sd[abefgh]1
Next, verify the RAID array to confirm that the re-syncing is started.
mdadm --detail /dev/md2
/dev/md2:
Version : 1.2
Creation Time : Wed Nov 12 11:19:11 2014
Raid Level : raid6
Array Size : 7813527552 (7451.56 GiB 8001.05 GB)
Used Dev Size : 1953381888 (1862.89 GiB 2000.26 GB)
Raid Devices : 6
Total Devices : 6
Persistence : Superblock is persistent
Update Time : Wed Nov 12 11:19:11 2014
State : clean, resyncing
Active Devices : 6
Working Devices : 6
Failed Devices : 0
Spare Devices : 0
Layout : left-symmetric
Chunk Size : 512K
Resync Status : 0% complete
Name : server.qualchem.co.nz:2 (local to host server.qualchem.co.nz)
UUID : 5a5bdba5:ed8bc7a7:f8c8d9bc:5f9da323
Events : 0
Number Major Minor RaidDevice State
0 8 1 0 active sync /dev/sda1
1 8 17 1 active sync /dev/sdb1
2 8 65 2 active sync /dev/sde1
3 8 81 3 active sync /dev/sdf1
4 8 97 4 active sync /dev/sdg1
5 8 113 5 active sync /dev/sdh1
Let's format the array.
mkfs.xfs /dev/md2
mkfs.xfs: /dev/md2 appears to contain an existing filesystem (xfs).
mkfs.xfs: Use the -f option to force overwrite.
Damn, it still thinks there is a file system there from the old RAID array, so we will force it.
mkfs.xfs /dev/md2 -f
meta-data=/dev/md2 isize=256 agcount=32, agsize=61043072 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=1953378304, imaxpct=5
= sunit=128 swidth=512 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=521728, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
Now let's mount this
mount /dev/md2 /zone/
And let's write a file to it
touch /zone/iamhere
dir /zone
Okay, we need to make this permanent. I have had some issues here: I had done all the above pretty much the same but without the zeroing, and on reboot md2 was gone. So I ran the following:
mdadm --examine --scan /dev/sda1 >> /etc/mdadm.conf
this put
ARRAY /dev/md/2 metadata=1.2 UUID=5a5bdba5:ed8bc7a7:f8c8d9bc:5f9da323 name=server.somewhere.co.nz:2
Now for a reboot
After the reboot, all is good. The raid is still there and correctly mounted.
FINALLY
Run this command
mdadm -Evvvvs
And put the results in a very safe place.
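To keep that record dated and easy to archive, you could wrap the command in a tiny helper. This is my own sketch; the output path and helper name are assumptions, so adjust to taste:

```shell
# Save the full superblock scan to a dated file in the given directory,
# e.g. backup_raid_layout /root
backup_raid_layout() {
  mdadm -Evvvvs > "$1/mdadm-layout-$(date +%F).txt"
}
```

Copy the resulting file off the machine as well; a layout backup stored only on the array it describes is not much of a backup.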
Using Parted
2x 3T as RAID 1
In this example I have 2x 3TB drives, /dev/sdb & /dev/sdc.
parted
parted /dev/sdb
GNU Parted 3.1
Using /dev/sdb
Welcome to GNU Parted! Type 'help' to view a list of commands.
(parted) mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
(parted) mkpart primary ext4 0% 100%
(parted) print
Model: ATA ST3000VN000-1HJ1 (scsi)
Disk /dev/sdb: 3.00TB
Sector size (logical/physical): 512B/4096B
Partition Table: gpt
Disk Flags:
Number  Start   End     Size    File system  Name     Flags
 1      0.00TB  3.00TB  3.00TB               primary
(parted) set 1 raid on
(parted) quit
Now do the same for /dev/sdc.
RAID
Next create the node, in my case it is /dev/md1
mknod /dev/md1 b 9 1
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
mdadm: array /dev/md1 started.
Let's have a look at things:
cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [raid1]
md1 : active raid1 sdc1[1] sdb1[0]
2930133824 blocks super 1.2 [2/2] [UU]
[>....................] resync = 0.2% (5934208/2930133824) finish=285.7min speed=170572K/sec
md0 : active raid5 sdf1[2] sdg1[3] sde1[1] sdd1[0]
5860535808 blocks level 5, 256k chunk, algorithm 2 [4/4] [UUUU]
unused devices: <none>
mdadm
Let's update mdadm.conf
mdadm --examine --scan /dev/sdb1 >> /etc/mdadm/mdadm.conf
/etc/mdadm/mdadm.conf now contains a new line:
ARRAY /dev/md/1 metadata=1.2 UUID=1e50affe:9c90758c:91cc9002:24f20e3a name=river.backup.geek.nz:1
Let's reboot to see if this all comes back up okay.
File System
mkfs.xfs /dev/md1
meta-data=/dev/md1 isize=256 agcount=4, agsize=183133364 blks
= sectsz=4096 attr=2, projid32bit=1
= crc=0
data = bsize=4096 blocks=732533456, imaxpct=5
= sunit=0 swidth=0 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=0
log =internal log bsize=4096 blocks=357682, version=2
= sectsz=4096 sunit=1 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
fstab
Adding a new entry.
/dev/md1 /zone xfs defaults 0 0
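Mounting by md device name works, but md numbering can change between reboots, so mounting by filesystem UUID is more robust. Grab the UUID with `blkid /dev/md1` and use an fstab line like the one below (the UUID shown is a placeholder, not from this system):

```
UUID=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx  /zone  xfs  defaults  0 0
```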
mount