Hi.
This is about the rate and bandwidth of a RAID scrub triggered by lvchange --syncaction check.
When scheduling periodic scrubs, it is not always easy to anticipate when the performance hit will not matter, and with large volumes a scrub can take a long time. Linux I/O scheduling is good, but I would still like to be able to lower the rate of a scrub to a crawl to keep all the bandwidth for myself when I need it, then let the scrub run at full speed when nobody is using the computer.
With an MD RAID, I can just adjust the sysctl dev.raid.speed_limit_max. It is very lightweight and can be done from pre/post-backup scripts or screensaver triggers.
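For reference, the MD workflow looks roughly like this; the rate values are only examples, and the hooks could just as well be screensaver triggers:

```shell
#!/bin/sh
# Sketch of the MD throttling workflow described above (illustrative values).
# Requires root; speed_limit_max is in KiB/s per device.

# Pre-backup hook: throttle any running MD resync/check to a crawl.
sysctl -w dev.raid.speed_limit_max=1000

# ... run the backup or other I/O-heavy task ...

# Post-backup hook: let the scrub run at full speed again.
sysctl -w dev.raid.speed_limit_max=200000
```

The key point is that the running check keeps its position throughout; only its rate changes.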
With an LVM RAID volume, the rate of the scrub can be controlled with lvchange --maxrecoveryrate. Unfortunately, this has the side effect of stopping the scrub; the scrub does not resume and must be started anew.
This is why I would kindly ask you to consider adding a feature to alter the maximum bandwidth allocated to a scrub on the fly. IMHO, the most convenient way would be an option to have LVM follow dev.raid.speed_limit_max like MD does, but any mechanism would be useful.
Barring that, or on top of that, I would kindly ask you to consider adding a way to resume a scrub where it was interrupted.
Thanks.