RAID Card Selection – JBOD Mode / Linux RAID


I don’t think this is off topic, since I want to use JBOD mode so that Linux can do the RAID. I’m hoping to run this under CentOS 5 and Ubuntu 12.04 on a Sun Fire X2250.
It’s hard to get answers I can trust out of vendors :-)

I have a Sun RAID card which I am pretty sure is an LSI OEM part. It is a 3 Gb/s SAS1 card with 2 external connectors like the one on the right here:

http://www.cablesondemand.com/images/products/CS-SAS1MUKBCM.jpg

And I have 2 x Sun J4400 JBOD cabinets, each with 24 disks.

If I buy a new card that is 6 Gb/s SAS2 with the same connector, can I connect my cabinets to it and have them work? Even if they only run at 3 Gb/s I don’t care.

I’ve also hit an issue with the number of logical devices allowed, and am wondering whether this is a HW, FW or SW limitation, if anyone knows. I want to run everything in JBOD mode and let Linux do the RAID. So for the first cabinet I ran this command 24 times to create a logical drive for each physical one:

/usr/StorMan/arcconf create 1 logicaldrive max volume 0,X noprompt

where X goes from 0 to 23. That went great: it created /dev/sd[c-z], and I was able to use those with mdadm to create 4 x RAID6 arrays and then a big RAID0 out of the 4 RAID6 arrays. Works great!
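
Concretely, the whole sequence looked roughly like this (a sketch: grouping the 24 drives into four 6-drive RAID6 sets is just one way to split them, and your device letters may differ):

# Create one single-drive logical device per physical disk (X = 0..23)
for X in $(seq 0 23); do
    /usr/StorMan/arcconf create 1 logicaldrive max volume 0,$X noprompt
done

# Four 6-drive RAID6 arrays over /dev/sd[c-z]
mdadm --create /dev/md0 --level=6 --raid-devices=6 /dev/sd[c-h]
mdadm --create /dev/md1 --level=6 --raid-devices=6 /dev/sd[i-n]
mdadm --create /dev/md2 --level=6 --raid-devices=6 /dev/sd[o-t]
mdadm --create /dev/md3 --level=6 --raid-devices=6 /dev/sd[u-z]

# One big RAID0 striped across the four RAID6 arrays
mdadm --create /dev/md4 --level=0 --raid-devices=4 /dev/md[0-3]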

Then I tried to connect the 2nd cabinet, and when I ran the above arcconf command it told me there were already 24 logical devices, and that this is the max.

Anyone know whether that is HW, FW or SW?
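
One way to confirm the count from the CLI (a sketch; the exact label in the getconfig output may vary by arcconf version):

# Count the logical devices the controller reports
/usr/StorMan/arcconf getconfig 1 LD | grep -ci "logical device number"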

Would a new card fix this problem? Does anyone know for sure of a card with the above connectors that has a JBOD mode that will support 48 drives and expose them all to Linux?

6 thoughts on - RAID Card Selection – JBOD Mode / Linux RAID

  • I don’t know anything specifically about your RAID card or enclosures, but the following experience might help you nonetheless.

    On some Dell PERC storage controllers (specifically the PERC 5/i and 6/i in my case – which I believe are LSI OEM) there is no JBOD mode, so I ended up creating a single-drive RAID0 volume for each drive. Then I set up a Linux software RAID volume on top of those with mdadm to chain them together into whatever RAID array type fits the job.
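
    A minimal sketch of that per-drive RAID0 workaround using LSI’s MegaCli (the PERC 5/i and 6/i are MegaRAID-based; the enclosure ID and slot count below are assumptions – check yours with -PDList first):

    # Find enclosure:slot IDs for the attached drives (IDs vary per system)
    MegaCli -PDList -a0 | grep -Ei 'enclosure device|slot number'

    # Create one single-drive RAID0 logical drive per slot (enclosure 32 assumed)
    for SLOT in $(seq 0 7); do
        MegaCli -CfgLdAdd -r0[32:$SLOT] -a0
    done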

    Hopefully the previous tidbit of information helps you.

    That’s going to really drag if you have to configure a RAID0 for all 48 disks … it’d be much easier if you could directly communicate with the drives. I’ve set up at most five or six RAID0 devices on one host and it’s not particularly enjoyable!


  • Well, using the command-line tools it is not bad at all – but my question remains unanswered: how many logical devices can your card have? I’ve hit my limit at 24, but I have 48 drives. So I need to find a card that can do 48 logical devices.

    Thanks.

  • Look at ZFS documentation and mailing lists. ZFS prefers JBOD and raw drive access, so there is plenty of information out there about which controllers work best that way.

    The LSI 9211-8i is one of the more popular ones.
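
    If you go that route, striping across several raidz2 vdevs gives the same shape as the 4 x RAID6 + RAID0 layout in a single command (a sketch – pool name and device letters are assumed):

    # One pool, four 6-disk raidz2 vdevs; ZFS stripes writes across the vdevs
    zpool create tank \
        raidz2 sdc sdd sde sdf sdg sdh \
        raidz2 sdi sdj sdk sdl sdm sdn \
        raidz2 sdo sdp sdq sdr sds sdt \
        raidz2 sdu sdv sdw sdx sdy sdz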

  • BTW, I read the specs on that and it says it is compatible with both 6 Gb/s and 3 Gb/s SAS, which hopefully means it will work with my Sun J4400 SAS1 shelf, right?

    I like that it is a JBOD-only card – that is exactly what I want.
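
    Once the card is in, the negotiated link rate per SAS PHY should be visible in sysfs (a sketch, assuming the mpt2sas driver; on a SAS1 shelf I’d expect 3.0 Gbit):

    # Print the negotiated link rate for every SAS PHY the kernel knows about
    grep . /sys/class/sas_phy/phy-*/negotiated_linkrate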
