Seems I need to disable RAID, but I don't find the option in the BIOS.
Four 1 TB drives in RAID 10 is the current setup.
But I can't find any way to stop the RAID.
Norman Schklar wrote:
Ok, I missed the beginning of this thread. Is it currently using Intel
(aka fakeRAID)? If so, it's *not* in the BIOS, but rather you have to watch as it comes up, before it gets to the CentOS boot. You'll see a prompt telling you which key to hit, and in there, in the firmware, you'll disable it.
Note that if you do, I believe that everything will be gone, and you’ll have to rebuild or restore. I don’t *think* software RAID will recognize what the fakeRAID left, though I could be wrong.
This was the first of the thread.
This is a new install, so nothing to lose. CentOS 6.4 doesn't find the drives. I use Ctrl-E to open the RAID console, but it doesn't have an option to disable. So each time it boots I get the RAID init. Not sure how you tell if it's fakeRAID. It is using LSI drivers. I think what I want to do is stop the RAID and use CentOS management, but I can't find where to stop the RAID. There may be a better solution, but what I've read so far says to disable the on-board RAID.
Oh. You’re using LSI? What’s the hardware card? I don’t think fakeRAID, the Intel on-motherboard thing, uses them. If you’ve got a *real* RAID
card in there, then you *should* use it – you don’t need the software RAID.
For that, what you need to do is create logical drives – follow the steps, choose the type of RAID, etc. in the firmware. *THEN*, once you've created the logical drives, when you reboot, the system *will* see them as though they were physical drives.
Does what I’m saying make sense to you?
Yes, it makes sense. But it came configured for RAID 10, 2 ea 2 terabyte drives. CentOS 6.4 doesn't recognize any drive.
Please stop top posting.
Norman Schklar wrote:
Does the system have other drives than the RAID? You mention it booting –
you *are* aware that with RAID 10, you'll have something like 1.6 TB usable?
I am aware of the 1.6 TB. Only the four 1 TB drives + DVD.
But I want to just stop the RAID altogether. Looking for the "how to" for turning off RAID at boot.
Norman Schklar wrote:
Ah, NOW the light dawns. You’re going to *adore* the answer (not!): break the RAID, and make each individual drive a logical drive, and I think you’ll be forced to tell it RAID 0. That’s the only way I know of.
So just to clarify, you are actually entering the proper RAID interface for accessing your drives, right?
A new Dell with an add-on RAID card will, in some cases, have two RAID interfaces during boot up: one for the stock internal RAID, and one for the LSI RAID. If you have the upgraded RAID card, you may be entering the wrong interface for accessing the RAID. Also, if this is new, you could easily get a chat going with Dell to help break the RAID like the previous answer suggested.
Either way, the RAID card driver is not supported by the Linux distro you've chosen and would need to be added. It's not hard; it just takes some googling and use of the CentOS.org howtos.
So, the question is: in the RAID interface that you actually see, can you not destroy what is there? If that's the case, I believe the RAID interface is reading "RAID" member metadata from the beginning of the hard drives, and they will be unusable until you break the RAID in the actual interface.
You need to know WHICH LSI Logic RAID controller this system has before you can get a straight answer. The MegaRAID 924x/926x/927x/928x stuff is way different than the 920x/921x HBA stuff or the 97xx 3ware-based stuff. Like, COMPLETELY different. MegaRAID in particular is a pain to configure but performs quite well.
If it's MegaRAID, they have an AWFUL 'web' style GUI in the BIOS, or a megacli command line which is nearly as awful. In either one, you'd need to DELETE the existing RAID 10 logical drive, then use the command line or GUI to 'convert all unassigned drives to JBOD' on controller 0 (a0).
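From the usual MegaCli cheat sheets, the delete-and-JBOD dance is roughly the following. The exact flags vary between MegaCli versions, so treat these as approximations and verify against `MegaCli -h` on your system before running anything:

```shell
# List the logical drives on controller 0 (a0) so you know what to delete:
MegaCli -LDInfo -Lall -a0

# Delete logical drive 0 (the RAID 10 volume). This destroys its contents:
MegaCli -CfgLdDel -L0 -a0

# Enable JBOD mode on the adapter so unconfigured drives can be exposed
# individually. Property name/syntax differs across firmware versions:
MegaCli -AdpSetProp -EnableJBOD 1 -a0
```

These commands require the controller to actually be present, so they can only be tried on the box itself.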
I pulled 3 of the 4 drives, restarted, and it's loading CentOS. Now I need to know how to set up the Linux RAID. Do I finish a complete server setup and then modify the software RAID, or do I need to do it on the front end?
Thanks for the help. Norm
I plugged in the other 3 drives and went to the RAID setup in CentOS. I created a RAID 5 with the three disks. It doesn't allow using the first (boot) disk in the RAID. Should I use a USB drive to host an install, then once it's running, install the system again on the RAIDed drives?
The easiest way I know of is to choose the disks and create the RAID members during the installation. Doing it after the fact, for your first time, is involved and not likely to work the first time around.
Try the install again and create the RAID groups during installation.
Booting from soft RAID has limitations: the /boot partition containing GRUB and the kernel either can't be RAID or can only be mirrored (RAID 1), while the root partition can be full RAID.
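To illustrate that split, a CentOS 6 kickstart fragment for four drives might look something like the following. The disk names and partition sizes here are assumptions for illustration, not anything from this thread; check the RHEL/CentOS 6 kickstart documentation for the exact `raid` directive options your release supports:

```
# /boot on a 4-way RAID 1 mirror (GRUB can boot from any member),
# root on RAID 10 across all four drives.
part raid.01 --size=500 --ondisk=sda
part raid.02 --size=500 --ondisk=sdb
part raid.03 --size=500 --ondisk=sdc
part raid.04 --size=500 --ondisk=sdd
raid /boot --level=1 --device=md0 raid.01 raid.02 raid.03 raid.04

part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
part raid.13 --size=1 --grow --ondisk=sdc
part raid.14 --size=1 --grow --ondisk=sdd
raid / --level=10 --device=md1 raid.11 raid.12 raid.13 raid.14
```

The same layout can be built interactively in anaconda's custom partitioning screen; the kickstart form just makes the /boot-mirrored, root-RAID 10 structure explicit.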
I generally avoid RAID 5/6 except for large-scale bulk nearline archival storage, and use RAID 1 or 10 for all 'operational' stuff. I also tend to use 2 drives just for the OS and software, then separate drives in whatever RAID configuration is appropriate for your large data (SQL databases, websites, file server spaces, etc.)... but I also don't like deploying RAID without online hot spares. RAID's only function is uptime availability in the face of drive failures, and to aggregate many drives into a single larger volume with potentially higher performance; it's NOT a substitute for backups.
The primary problem I have is that anaconda does not recognize my RAID 10. To get past that problem, I broke the RAID and used one drive. I didn't find a place to adjust the drives during the install, so it went onto only one drive. This is a new install, so not much to mess up.
CentOS 6.4 on an Intel server.
CentOS 6.4. This is still a new install, so I'm not concerned with losing any data. I am now booting into a 3.6 TB storage area, which is the four 1 TB drives. I want roughly 2 TB active as RAID 10. How do I set up the software RAID before installing CentOS 6.4?
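For what it's worth, once the controller presents the four disks individually, Linux software RAID is built with mdadm, whether from the installer's shell or afterwards. A rough sketch of the commands; the device names are assumptions, so verify yours with `cat /proc/partitions` first:

```shell
# Assumes the four drives appear as sda-sdd, each with one partition
# of type "Linux raid autodetect" (fd). Device names are guesses --
# check /proc/partitions on the actual box before running anything.

# Build a 4-drive RAID 10 array from the partitions:
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
    /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# Put a filesystem on it, and record the array so it assembles at boot:
mkfs.ext4 /dev/md0
mdadm --detail --scan >> /etc/mdadm.conf
```

That said, for a fresh install it is easier to let anaconda create the arrays during installation, as suggested earlier in the thread, since the installer handles the /boot restrictions for you.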