
Conversation

@mkudlej commented Jan 30, 2018

No description provided.

@ltrilety (Contributor) left a comment

looks fine

@fbalak (Contributor) left a comment

Looks good. Just a few questions...
Also, please add a PR description.

# Get the number of sectors until the next simulated failure
repeated_set=$(echo "${repeat_failure_every}*1024*1024/${sector_size}" | bc)
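# e.g. repeat_failure_every=100 (MiB) and sector_size=512
# give 100*1024*1024/512 = 204800 sectors until the next failure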

#block100=10
Contributor left a comment

Why is this commented-out code here?

@mkudlej (Author) replied

This code is here so people can try it out with easily human-countable data sizes. Should I remove it?

@dahorak (Contributor) commented Feb 5, 2018

@mkudlej are the created devices "permanent"? (do they survive reboot?)

@mkudlej (Author) commented Feb 5, 2018

@dahorak Honestly, I don't know for sure, but I believe these changes are not permanent. They are like any other device-mapper changes made by LVM: to survive a reboot, they have to be stored in configuration files, as LVM does for its disks.
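For illustration, a minimal sketch of saving and restoring a plain device-mapper table across reboots; the mapping name failing_disk and the path under /etc are hypothetical, not part of this PR:

    # dmsetup tables live only in kernel memory, so save the table first
    dmsetup table failing_disk > /etc/dmdisks/failing_disk.table

    # after reboot, recreate the mapping from the saved table
    # (dmsetup create reads a table from stdin when --table is omitted),
    # e.g. from a boot-time script or a systemd unit
    dmsetup create failing_disk < /etc/dmdisks/failing_disk.table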

@dahorak (Contributor) commented Feb 5, 2018

@mkudlej Would it be possible to make the devices permanent (store the configuration in configuration files) directly in the qe-dmdisks role, or do you think that doesn't make much sense?

@dahorak (Contributor) commented Feb 6, 2018

@mkudlej I wasn't able to create any volume on the devices :-/ (I've tried it multiple times with similar results).
It seems to me that this is caused by the simulated failures, which prevent even the creation of the partitions/bricks and the volume, but I'm not sure how to fix it...
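For illustration, one way to confirm that the simulated failures are what breaks partition/volume creation is to read the mapped device directly and watch for I/O errors; the device name /dev/mapper/failing_disk is a hypothetical example:

    # sequential read over the whole mapping; with simulated bad sectors
    # this should fail with an I/O error before reaching the end
    dd if=/dev/mapper/failing_disk of=/dev/null bs=1M status=progress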

@mbukatov changed the title from "add first version of setup for creating disk with failures" to "[wip] add first version of setup for creating disk with failures" on Dec 19, 2018
@mbukatov (Contributor) commented Mar 8, 2019

I guess we are trying too hard here: mkfs fails on such a device, so there is nothing left to try.

@mbukatov (Contributor) commented

Actually, it would be better to:

  • stop the gluster daemons
  • umount the brick volume
  • set up a dm-flakey based block device on top of the just-umounted brick device
  • edit fstab to use the new flakey device, and mount it again
  • start gluster again

Errors simulated by the flakey device should not prevent the brick from being mounted or the gluster daemons from starting; a sketch of these steps follows.
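A minimal sketch of that procedure, assuming a brick mounted at /bricks/brick1 from /dev/sdb1 and a systemd-managed glusterd; the device names, mount point, and flakey up/down intervals (60 s up, 1 s down) are illustrative assumptions, not part of this PR:

    # stop gluster daemons and release the brick
    systemctl stop glusterd
    umount /bricks/brick1

    # dm-flakey table: <start> <length> flakey <dev> <offset> <up_sec> <down_sec>
    # the device works normally for 60 s, then fails all I/O for 1 s
    sectors=$(blockdev --getsz /dev/sdb1)
    dmsetup create flaky-brick --table "0 ${sectors} flakey /dev/sdb1 0 60 1"

    # point fstab at the flakey device and remount the brick
    sed -i 's|/dev/sdb1|/dev/mapper/flaky-brick|' /etc/fstab
    mount /bricks/brick1

    # start gluster again
    systemctl start glusterd

Because the device is up most of the time, the brick mounts and glusterd starts normally, while I/O errors are still injected during the down intervals.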

@mbukatov removed their request for review May 31, 2019