  cat /proc/mdstat
Output
<code Bash [enable_line_numbers="false",highlight_lines_extra="2,4"]>
Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
md0 : active raid5 sdc[3] sdb[1] sda[0]
      209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/2] [UU_]
      [>....................]  recovery = ...
  
unused devices: <none>
</code>
As you can see in the first highlighted line, the /dev/md0 device has been created in the RAID 5 configuration using the /dev/sda, /dev/sdb and /dev/sdc devices. The second highlighted line shows the progress of the build. You can continue the guide while this process completes.
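If you want to keep an eye on the build without re-running the command, watch can refresh the view for you, and mdadm --detail reports the same progress (the 5-second interval below is just an example):
<code bash>
# Refresh /proc/mdstat every 5 seconds (Ctrl+C to exit)
watch -n 5 cat /proc/mdstat

# Alternatively, query the array state and rebuild progress directly
sudo mdadm --detail /dev/md0
</code>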

====Create and Mount the Filesystem====
Next, create a filesystem on the array:
  sudo mkfs.ext4 -F /dev/md0
Create a mount point to attach the new filesystem:
  sudo mkdir -p /mnt/md0
You can mount the filesystem by typing:
  sudo mount /dev/md0 /mnt/md0
Check whether the new space is available by typing:
  df -h -x devtmpfs -x tmpfs
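As an optional extra check, lsblk and findmnt can confirm the filesystem and the mount point (assuming the /mnt/md0 mount point created above):
<code bash>
# Show the array, its filesystem type and mount point
lsblk -f /dev/md0

# Confirm that /mnt/md0 is an active mount backed by /dev/md0
findmnt /mnt/md0
</code>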

====Save the Array Layout====
To make sure that the array is reassembled automatically at boot, we will have to adjust the /etc/mdadm/mdadm.conf file.

Before you adjust the configuration, check again to make sure the array has finished assembling. Because of the way that mdadm builds RAID 5 arrays, if the array is still building, the number of spares in the array will be inaccurately reported:
  cat /proc/mdstat
Output
  Personalities : [raid1] [linear] [multipath] [raid0] [raid6] [raid5] [raid4] [raid10]
  md0 : active raid5 sdc[3] sdb[1] sda[0]
        209584128 blocks super 1.2 level 5, 512k chunk, algorithm 2 [3/3] [UUU]
  
  unused devices: <none>
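Instead of polling /proc/mdstat by hand, mdadm can also block until the initial sync has finished (an optional shortcut; --wait simply waits for any resync or recovery on the array to complete):
<code bash>
# Wait until any resync/recovery on md0 has finished
sudo mdadm --wait /dev/md0
</code>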
Once the output shows that the rebuild is complete, scan the active array and append the result to the file by typing:
  sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf
Afterwards, you can update the initramfs, or initial RAM file system, so that the array will be available during the early boot process:
  sudo update-initramfs -u
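If you want to confirm that the mdadm configuration really ended up inside the new initramfs, lsinitramfs (part of initramfs-tools on Debian/Ubuntu) can list its contents; the initrd path below assumes a standard Ubuntu layout:
<code bash>
# Look for mdadm files inside the freshly rebuilt initramfs
lsinitramfs /boot/initrd.img-$(uname -r) | grep mdadm
</code>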
Add the new filesystem mount options to the /etc/fstab file for automatic mounting at boot:
  echo '/dev/md0 /mnt/md0 ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab
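Before relying on a reboot, you can test the new fstab entry directly; the sketch below assumes nothing is currently using /mnt/md0:
<code bash>
# Unmount the array, then remount everything listed in /etc/fstab
sudo umount /mnt/md0
sudo mount -a

# If /dev/md0 shows up on /mnt/md0 again, the fstab entry works
findmnt /mnt/md0
</code>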

Your RAID 5 array should now be assembled and mounted automatically at each boot.

=====Trying to get more performance=====
A quick summary of the main tweaks (explained in the sections below):
  echo 16384 | sudo tee /sys/block/md0/md/stripe_cache_size
  sudo blockdev --setra 65536 /dev/md0

====Increase speed limits====
The easiest thing to do is to increase the kernel's RAID speed limits, which control how fast a resync or rebuild is allowed to run. You can see the current limits on your system with these commands:
  sudo sysctl dev.raid.speed_limit_min
  sudo sysctl dev.raid.speed_limit_max
These values are expressed in kibibytes per second (KiB/s).

You can raise them to much higher values:
  sudo sysctl -w dev.raid.speed_limit_min=100000
  sudo sysctl -w dev.raid.speed_limit_max=500000
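Keep in mind that sysctl -w only lasts until the next reboot. To make the limits permanent, you can drop them into a file under /etc/sysctl.d/ (the file name below is arbitrary):
<code bash>
# Persist the RAID speed limits across reboots
sudo tee /etc/sysctl.d/90-raid-speed.conf <<'EOF'
dev.raid.speed_limit_min = 100000
dev.raid.speed_limit_max = 500000
EOF

# Load all sysctl configuration files immediately
sudo sysctl --system
</code>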

====Increase stripe cache size====
By allowing the array to use more memory for its stripe cache, you may improve performance; in some cases this can improve it by up to 6 times. By default the stripe cache is 256 pages, and Linux uses 4096-byte pages, so with 256 pages and 10 disks the cache would use 10*256*4096 = 10 MiB of RAM. In my case, I have increased it to 4096 pages (with 3 disks that is 3*4096*4096 = 48 MiB):
  echo 4096 | sudo tee -a /sys/block/md0/md/stripe_cache_size
The maximum value is 32768. If you have many disks, this may well take all your available memory. I don't think values higher than 4096 will improve performance, but feel free to try it ;)
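This value also resets at reboot. One way to re-apply it automatically is a udev rule that writes the sysfs attribute whenever md0 is assembled; treat this as a sketch rather than a tested recipe:
<code bash>
# Re-apply the stripe cache size every time md0 appears
sudo tee /etc/udev/rules.d/60-md0-stripe-cache.rules <<'EOF'
SUBSYSTEM=="block", KERNEL=="md0", ACTION=="add|change", ATTR{md/stripe_cache_size}="4096"
EOF

# Reload the rules and apply them to the existing device
sudo udevadm control --reload
sudo udevadm trigger /dev/md0
</code>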

====Increase read-ahead====
If configured too low, the read-ahead of your array may make things slower.

You can get the current read-ahead value with this command:

  sudo blockdev --getra /dev/md0
The value is in 512-byte sectors. You can set it to 32 MiB (65536 sectors) to be safe:

  sudo blockdev --setra 65536 /dev/md0
This can improve performance, but don't expect it to be a game-changer unless it was configured really low in the first place.
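To judge whether any of this actually helps, a rough before/after sequential read test is usually enough; the example below just reads 1 GiB straight from the array, bypassing the page cache, and is only a crude indicator:
<code bash>
# Crude sequential read benchmark: 1 GiB directly from the array
sudo dd if=/dev/md0 of=/dev/null bs=1M count=1024 iflag=direct

# Check the read-ahead value currently in effect (in 512-byte sectors)
sudo blockdev --getra /dev/md0
</code>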