CentOS 7 RAID Tutorial?


I want to set up a new CentOS install using version 7 and would like
to experiment with various RAID levels. Anyone care to point out a
tutorial?

TIA

Dave

26 thoughts on - CentOS 7 RAID Tutorial?

  • How many drives do you have for this experiment? What's the target use case for the filesystems on the RAID(s)? What's the level of data resilience required by said use case?

    RAID only protects against one specific sort of failure, where an entire disk drive fails. It doesn't protect against data corruption, system failure, software failure, or any other catastrophe.

  • Even more: a system failure or power loss is more likely to destroy all data on software RAID than on a single drive when there is a lot of IO in flight
    (to the best of my understanding, losing the cache that software RAID is using is more catastrophic than losing a journaled filesystem's cache under the same circumstances – somebody may correct me). So it may be worth thinking about hardware RAID.

    Just my 2c.

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev
    Sr System Administrator
    Department of Astronomy and Astrophysics
    Kavli Institute for Cosmological Physics
    University of Chicago
    Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • I think an essential feature of any md RAID that’s not a RAID1 is a UPS
    and a mechanism for a clean shutdown in case of extended power failure.
    (An md RAID1 might be okay in this instance, but I wouldn’t want to risk it.) But this is true for any RAID, which is why many controllers come with a BBU (and if you don’t have a BBU on your hardware RAID controller, then you absolutely need the UPS setup I described).
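    As a sketch of the clean-shutdown piece, assuming apcupsd and a
    USB-attached UPS (the thresholds below are illustrative, not
    recommendations):

        # /etc/apcupsd/apcupsd.conf -- excerpt only
        UPSCABLE usb
        UPSTYPE usb
        DEVICE
        # shut the box down once under 5% charge or 3 minutes of runtime
        BATTERYLEVEL 5
        MINUTES 3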

    OTOH, the OP wasn’t clear on what he was doing; perhaps he is just playing around, and doesn’t care about data preservation at this time. If you’re just testing performance then data integrity in the face of a power failure is less of a concern.

    –keith

  • Valeri makes an excellent point, which I would like to elaborate on:

    Hardware RAID *with* flash-backed/battery-backed cache. I find it endlessly fascinating how many machines out there have hardware RAID
    without a BBU/FBU. Using write-back caching without a battery leaves you in no better position than software RAID.

    Note that if you do get hardware RAID with BBU/FBU, be sure the cache policy is set to “Write-Back with BBU” (or your controller's wording). The idea here is that, if the battery/caps fail or drain, the controller switches to write-through (i.e., no) caching.
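    On LSI controllers, for example, that policy looks roughly like this
    with MegaCli (a sketch only; other vendors' CLIs differ):

        # enable write-back, but drop to write-through if the BBU goes bad
        MegaCli64 -LDSetProp WB -LAll -aAll
        MegaCli64 -LDSetProp NoCachedBadBBU -LAll -aAll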

    I’m not so familiar with software RAID, but I would be surprised if there isn’t a way to force write-through caching. If this is possible, then Valeri’s concern can be addressed (at the cost of performance).
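    For illustration: md itself doesn't do write-back caching, but the
    member drives usually do, so one way to force write-through is at the
    drive level (device names are examples; this does cost performance):

        # query, then disable, the on-drive write cache on a SATA disk
        hdparm -W /dev/sda
        hdparm -W0 /dev/sda
        # SAS disks use a mode page instead
        sdparm --clear=WCE /dev/sdb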

    digimer

  • A UPS is certainly better than nothing, but I would not consider it safe enough. A BBU/FBU will protect you if the node loses power, right up to the failure of the PSU(s). I've seen shorted cable harnesses take out servers with redundant power supplies, popped breakers in PDUs/UPSes, knocked-out power cords, etc. So a UPS is not a silver bullet for safe write-back caching in software arrays. Good, yes, but not perfect.

  • Software RAID on enterprise-grade JBOD *is* write-through caching.
    The OS will only cache writes until an fsync/fdatasync/etc., and then it will flush them to the md device, which will immediately flush them to the physical media. Where it goes sideways is when you use cheap consumer-grade desktop drives; those often lie about write completion to improve Windows performance… but they would be a problem with or without mdraid – indeed, they would be a problem with hardware RAID, too.
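    You can see that flush path from userspace: dd with conv=fsync won't
    return until the data has been pushed through md toward the disks
    (the mount point below is just an example):

        dd if=/dev/zero of=/mnt/raid/testfile bs=4k count=256 conv=fsync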

    This is why I really like ZFS (on Solaris and BSD, at least): it timestamps and checksums every block it writes to disk. With a
    conventional RAID1, if the two copies don't match, you don't know which one is the 'right' one. The ZFS scrub process will check these timestamps and CRCs, and correct the 'wrong' block.
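    For example (the pool name is just a placeholder):

        zpool scrub tank        # walk every block, verify checksums, repair
        zpool status -v tank    # watch progress and any repaired errors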

    I did a fair bit of informal(*) benchmarking of some storage systems at work before they were deployed. Using a hardware RAID card such as an LSI MegaRAID 9260 with 2GB BBU cache (or an HP P410i or similar) is most certainly faster at transactional-database-style random read/write testing than using a simple SAS2 JBOD controller. But using mdraid, with the MegaRAID configured just as a bunch of disks, gave the same results when write-back caching was enabled in the controller. At different times, using different-but-similar SAS2 RAID cards, I
    benchmarked 10-20-disk arrays in various levels like 10, 5, 6, 50, and
    60, built with 7200RPM SAS2 ‘nearline server’ drives, 7200RPM SATA
    desktop drives, and 15000RPM SAS2 enterprise server drives. For an OLTP-style database server under high concurrency and high transaction/second rates, RAID10 with lots of 15k disks is definitely the way to go. For bulk file storage that's write-once and read-mostly, RAID 5, 6, and 60 perform adequately.
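    For anyone wanting to reproduce the RAID10 case with mdraid, a minimal
    sketch (disk names are examples):

        # 4-disk md RAID10, then a filesystem on top
        mdadm --create /dev/md0 --level=10 --raid-devices=4 \
            /dev/sdb /dev/sdc /dev/sdd /dev/sde
        mkfs.xfs /dev/md0
        cat /proc/mdstat    # watch the initial sync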

    (*) My methodology was ad hoc rather than rigorous; I primarily observed trends, so I can't publish any hard data to back these conclusions. My tests included PostgreSQL with pgbench, and various bonnie++ and iozone tests. Most of these tests were on Xeon X5600-
    class servers with 8-12 cores and 24-48GB RAM.
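    For reference, a typical pgbench invocation of that sort (the scale and
    client counts here are illustrative, not the ones actually used):

        createdb bench
        pgbench -i -s 100 bench           # initialize at scale factor 100
        pgbench -c 16 -j 4 -T 600 bench   # 16 clients, 4 threads, 10 minutes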

  • This is a pretty interesting discussion but has not revealed an
    on-line tutorial. Anyone?

    Dave

  • My own experience is very limited, but I have heard from a handful of reliable sources who are very happy with ZFS. As Digimer noted, much of the challenge is in the licensing, not in the technology.

    ZFS has been packaged for RHEL/CentOS:

    http://zfsonlinux.org/epel.html

    So it should be fairly straightforward to get things working. (If you’ve installed your own kernel you may need the generic RPM packages instead.)
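    Roughly (see the page above for the exact release RPM to install first):

        # after installing the zfs-release repository package:
        yum install zfs
        modprobe zfs
        zpool status    # should report "no pools available" on a fresh box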

    –keith

  • It exists for Ubuntu, so I can use it from a PPA for testing. I would like to understand more about this license issue. If you can send me more about it, that will help me understand the issue.

    Thanks, Eliezer

  • Dear All,

    This list reminds me of the wizards in the Terry Pratchett novels –
    confronted with a need to take action, they would far rather discuss
    all possibilities, however remote, than address the need in a simple
    way; after all, they're WIZARDS.

    The discussion has been pretty interesting; I've figured out what I
    wanted to know through other means.

    Thanks for the perspectives.

    Dave

  • Hi Dave,

    I can understand your feeling, but I need to say that “storage”, as a topic, is a very big one. People form entire careers around the topic. So when discussing storage without a specific context, conversations like this are inevitable.

    All the points that have been made in this thread are valid and important. So I suppose the better thing, if you were still looking for answers, would be to ask the question with a particular use case in mind. That would allow people to stay more focused in their answers.

    Cheers!

    digimer

  • Page one: define RAID. Page two: who should use it. Configuring RAID (after a few more wasted pages): read the man page. It seems too sparse for a beginner and, with such sparseness, not much use to an experienced admin.

    I can’t see that being very useful to someone with little or no RAID
    experience. (And actually, it’s so sparse that the experienced won’t need the little bit of suggestion it gives.)

    In contrast, the CentOS wiki article, if you're running CentOS 5 or
    6, gives an easy-to-follow guide, complete with commands one might actually type. (At least some of the instructions don't seem to work with CentOS 7,
    though.)

  • Did you read the topic of this thread?
    This is about version 7 ;)

    Furthermore, which wiki article are you referring to?
    The search yields many results, like:
    http://wiki.CentOS.org/HowTos/CentOS5ConvertToRAID?highlight=%28raid%29

    which might not be what you want, depending on what you want. I'm no fan of copy-and-paste tutorials unless they are used only for very specific use cases, and just using a CentOS 5 tutorial on CentOS 7 doesn't seem the best way to start things.

  • Sorry, I wasn't clear, and I apologize. My point is that the tutorial (which, at least judging from my experience, won't work in 7) is detailed and helpful for both the novice and the more experienced. In contrast, upstream seems to have basically paid someone to dress up “read the man page” in
    10 pages.

    Again, apologies. Some mental shorthand on my part, as I was recently using that article in a work situation. I meant this one.

    http://wiki.CentOS.org/HowTos/Install_On_Partitionable_RAID1

    Which I suppose you could call cut and paste. Now, I did, for my own knowledge, see if I could get that to work on CentOS 7, but I couldn't.

    I don’t know how much knowledge the OP has or doesn’t have. I don’t think it’s unreasonable to expect instructions to include examples and commands, but we’re now very much outside the scope of this thread. :)

    TL;DR
    I wasn't clear. My point, in one sentence, is that I don't consider the RH
    documentation very good, and a tutorial for CentOS 7, written in the style of the tutorial I linked to, would be far more helpful.

  • Ok, maybe one more. I manage a server with CentOS 5.10 that has a raid
    10 array with a hot spare. It was well set up by someone else and has
    worked very well. It also has a Xen kernel and several VMs, also all
    working well.

    The age of the OS and accumulated cruft in the application side,
    together with the absence of the person who did the original setup,
    have me thinking about a new clean install – first on a transitional
    box for continuity, then a new config on the current hardware with the
    same basic design but more up-to-date; we are now a long way from
    version 5.3.

    I had seen reference to a much improved raid installation procedure in
    version 7. I can now confirm that the process is indeed much simpler
    and so I have been able to get on with performance testing in various
    scenarios, which was what I wanted to do. My experience has been that
    if you want a tutorial on any topic, the internet is flooded with them.
    But I didn’t see one for this topic, so I thought a routine query on
    this list would get me a starting point.
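    For anyone following along, the installer-driven layout can also be
    scripted; a minimal kickstart sketch for a RAID1 root (disk names and
    sizes are illustrative):

        part raid.01 --size=20480 --ondisk=sda
        part raid.02 --size=20480 --ondisk=sdb
        raid / --device=root --fstype=xfs --level=RAID1 raid.01 raid.02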

    That didn’t exactly happen. But there’s lots of food for thought in
    the answers I got and I’ll be able to take at least some of that
    forward. And if I'm motivated enough, maybe I'll make the tutorial!

    Dave

  • Thanks for the clarification and the link!

    No need to apologize; I tend to forget things myself from time to time.

    kind regards

    Sven
