CentOS 6, Mptfusion Software?


Hi, folks,

Got an older Dell R410 with an LSI 1068E PCI-Express Fusion-MPT SAS controller (rev 08). It *appears*, a) from trying MegaRAID, and b) from what I’m googling, that what I need are mptfusion-related packages. Unfortunately, yum shows me nothing available in base, epel, or rpmfusion. Am I looking for the wrong thing, or does anyone have a source? (No, I haven’t looked at LSI’s, sorry, Avago’s website, which I don’t find overly friendly…)


7 thoughts on - CentOS 6, Mptfusion Software?

  • Hi Mark,

    I happen to have a Dell R410 that runs CentOS 6. Here is what I’ve got (I’m obliterating the hostname below):

    # uname -a
    Linux *******.uchicago.edu 2.6.32-642.1.1.el6.x86_64 #1 SMP Tue May 31 21:57:07 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux

    # lspci
    02:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E
    PCI-Express Fusion-MPT SAS (rev 08)

    # lsmod
    mptsas                 51992  9
    mptscsih               36638  1 mptsas
    mptbase                93647  2 mptsas,mptscsih
    scsi_transport_sas     35588  1 mptsas

    # find /lib/modules/2.6.32-642.1.1.el6.x86_64 -name mptsas\*
    /lib/modules/2.6.32-642.1.1.el6.x86_64/kernel/drivers/message/fusion/mptsas.ko

    # rpm -qf /lib/modules/2.6.32-642.1.1.el6.x86_64/kernel/drivers/message/fusion/mptsas.ko
    kernel-2.6.32-642.1.1.el6.x86_64

    As you can see, it all works for me with the stock kernel; no need to fiddle with anything.

    I hope this helps.


    Valeri Galtsev
    Sr System Administrator
    Department of Astronomy and Astrophysics
    Kavli Institute for Cosmological Physics
    University of Chicago
    Phone: 773-702-4247

  • Wait, a 1068E is not a MegaRAID; that’s an HBA (host bus adapter) with optional, very limited RAID in firmware (typically RAID 0, 1, or 10 only).
    Those have two firmware sets, IR or IT. If your card has the IR firmware, do yourself a favor: find the “IT” firmware on Avago’s webpile and reflash the card with it. Then it’s a straight SAS card, your disks are seen as native SAS drives, and you can use Linux-native mdraid on it (and/or LVM or whatever). I believe the 1068E was used on the SAS3081/3082 cards.

    That’s an older SAS1 card, and has a 2 TB disk limit, I believe, which can’t readily be circumvented.

    Confusingly, to flash these you need to get the SAS3081ER_Package_P21_IR_IT_Firmware_BIOS_for_MSDOS_Windows package, unzip it, and locate the firmware and BIOS files for IT mode, then get the sasflash utility for Linux to actually flash the files; or flash it using MSDOS (FreeDOS on a USB stick) or the EFI shell (if your system is so endowed).

    To find this stuff, go to http://www.avagotech.com/support/download-search, select “Legacy Host Bus Adapters” and “LSI SAS 3081E-R”, and Search…
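    The 2 TB ceiling mentioned above for these SAS1 cards falls out of 32-bit LBA addressing of 512-byte sectors; a quick shell sanity check (the arithmetic, not any particular tool, is the point here):

    ```shell
    # 32-bit LBAs x 512-byte sectors = the largest disk a SAS1 HBA can address
    echo $(( 2**32 * 512 ))            # bytes -> 2199023255552
    echo $(( 2**32 * 512 / 1024**4 ))  # -> 2 (TiB)
    ```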

  • Hey, Valeri,

    Valeri Galtsev wrote:

    OK, I can legitimately claim to be sick (home yesterday, seeing a doc), so that’s my excuse for not finishing describing what I was looking for: I want something like MegaRAID that will understand the drives, so that I can figure out which drive just failed and replace it. There are no blinkenlights, no clue… and based on the two that were *not* lit, but were mounted, it’s one of the two on the left that *are* lit, and I’m assuming RAID 1 here.



  • Hi, John,

    John R Pierce wrote:

    Ahhh! Thanks, that’s a useful bit of info. I’ll mention it to my manager. However, much more important is finding something that will tell me *which* drive in a RAID just failed so I can replace it….


  • In the worst-case scenario, you can reboot the machine and, at boot, use the key combination to go into the controller BIOS settings; there you should be able to see which drive failed (Ctrl+H or similar; it will tell you which keys). I doubt Dell tweaked that away from what LSI has, even though the LSI firmware _is_ tweaked by Dell. You can trust that Dell counts drives from left to right (when you look at the machine’s front panel). I don’t remember the name of the client utility that gives access to this information while the system is running (the interface of which is obscure, to avoid saying nastier words, so it is easy to royally screw up in it… that’s why I love 3ware, which has passed away, alas).
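    For what it’s worth, one candidate for that runtime client utility is mpt-status, which queries Fusion-MPT IR firmware for array state; that it matches this particular card/firmware combination, and its availability as a package, are assumptions:

    ```shell
    # mpt-status talks to the IR firmware via the mptctl interface.
    # Guarded so this sketch is a no-op on machines without the tool.
    if command -v mpt-status >/dev/null 2>&1; then
        modprobe mptctl || true   # expose /dev/mptctl
        mpt-status -p || true     # probe for the controller's IOC id
        mpt-status -i 0 || true   # array/member state; failed disks are flagged
    fi
    ```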


    Valeri Galtsev
    Sr System Administrator
    Department of Astronomy and Astrophysics
    Kavli Institute for Cosmological Physics
    University of Chicago
    Phone: 773-702-4247

  • That’s something that’s remained a deep, dark secret in the Linux (and generic Unix) world, ‘left as an exercise for the reader’. There’s no standard for mapping those SAS (or SCSI) backplane lights to specific drives, and my general experience is that the lights only work right with brand-name systems using their own brand-name proprietary RAID piles.
    There’s a SAS/SCSI control command (it escapes me at the moment) which will turn the backplane lights on and off, but there’s no standard glue connecting it to drive-failure events. A quick batch of googling suggests sas2ircu (LSI proprietary?) and ledmon (https://sourceforge.net/projects/ledmon/) are worth investigating.

    I’ve printed labels with the partial WWN of the drives and stuck them on each hot-swap tray, and identified the failed drive via those. To verify, I’ll do something like dd if=/dev/mdX of=/dev/null bs=65536 to make all the lights of the working drives blink as fast as possible, and confirm that the one I think I want to replace is the one that’s not blinking.
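    The labeling-plus-blink procedure above can be sketched as follows; /dev/md0, the sd* names, and the sysfs wwid attribute path are assumptions to adapt to your own system:

    ```shell
    # Print each disk's WWN so the hot-swap trays can be labeled.
    for w in /sys/block/sd*/device/wwid; do
        if [ -r "$w" ]; then printf '%s: %s\n' "$w" "$(cat "$w")"; fi
    done

    # Sequential read of the whole array: every healthy member's activity
    # LED blinks continuously; the failed drive is the one that stays dark.
    # Guarded so this is a no-op where the array device does not exist.
    if [ -b /dev/md0 ]; then
        dd if=/dev/md0 of=/dev/null bs=65536
    fi
    ```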

  • Usually, the info panel on the front will tell you, or Dell’s OMSA tools will. I use the Nagios check_openmanage plugin to tell me these kinds of things.