Motherboard And Chipset Compatibility


So, having returned from a month’s vacation, I’m back to work on attempting to build a set of small form factor CentOS compatible computers. I’ve really tried to do my homework, but this doesn’t appear (at first glance)
to be at all easy. It’s not made easier by the fact that I have to get it right the first time (and I haven’t built a PC in a decade); the time and money cost of shipping anything to and from my remote location in Chile means I can’t afford to waste time buying and returning things.

First question: does anyone have any experience with the Jetway NF9E-Q77 or ZOTAC Z77ITX-A-E motherboards? Having struck out on Intel Q77 or Z77-based SFF motherboards (the DQ77** series is completely out of stock everywhere, and the DZ77** series is ATX only), I have found a couple of Mini-ITX
systems based on these two motherboards.

Second question: Where can I get information about which Intel chipsets
(Z77 vs Z87 vs Q77 vs C602 vs …geez, there are a LOT of chipsets, as evidenced by http://www.supermicro.com/support/faqs/os.cfm) are supported by CentOS 6 / RHEL 6? I have not been able to find this information on either the Intel, RedHat, or CentOS web sites.
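Absent an official compatibility matrix, the closest practical check I know of is to boot a live CentOS image on a candidate board and verify that the kernel binds a driver to every PCI device the chipset exposes. A minimal sketch (the `list_unbound` helper name is mine, and it takes the sysfs path as an argument only so it can be exercised against a test tree):

```shell
#!/bin/sh
# Print every PCI device that has no kernel driver bound to it --
# a quick proxy for "this chipset component is not supported by
# the running kernel". Defaults to the real sysfs PCI tree.
list_unbound() {
    dir=${1:-/sys/bus/pci/devices}
    for dev in "$dir"/*; do
        [ -d "$dev" ] || continue
        [ -e "$dev/driver" ] || echo "no driver bound: ${dev##*/}"
    done
}

# On a live system: list_unbound
# Cross-check any IDs it prints against "lspci -nn" output.
```

Anything the helper flags can then be looked up by vendor/device ID to see whether a driver exists at all or just isn’t in the EL6 kernel.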

Third (more general) question: My requirements are (I believe) modest:
* 1U short-depth rackmount chassis OR Mini-ITX small-footprint chassis
* Dual GbE network ports
* Dual 1920×1200 monitor display
* One SSD drive
* 32-bit CentOS 6.4 compatible.

It’s the combination of the first, third, and fifth requirements that really seems to get me hung up. I’ve found plenty of 1U server systems
(such as SuperMicro), but none of them support dual displays. (Some of them have a PCIe16x riser card that could conceivably accommodate a separate graphics card, assuming I could find one that fits; I have Emails in to various tech supports to inquire about this. I’ve found LOTS of 2U
solutions, thanks, but only have 1U of available rack.) As far as Linux support goes, the RHEL Hardware List has thus far been pretty useless (much of the hardware on it is obsolete or discontinued), and most manufacturers’
web sites have been equally useless. (One exception being ASUS, which has a Linux-compatibility list at http://www.asus.com/websites/global/aboutasus/OS/Linux.pdf. SuperMicro has a very nice list referenced above, but none of their small form factor motherboards support dual displays AFAICT; I have found nothing useful at Intel’s site.)

Does anyone have any resources they’d like to point me to?

Thanks,
-G.

19 thoughts on - Motherboard And Chipset Compatibility

  • Glenn Eychaner wrote:

    VERY STRONG RECOMMENDATION: DON’T buy Supermicro. They have a *lot* of trouble with this new, fuzzy concept called “quality control”.

    For example, we have a cluster with 21 Penguin servers, about half with 48
    cores, and the rest with 64 cores. You’d think this kind of hot, high end server would call for a lot of attention.

    No. We’ve sent at *least* 5 or 6 back to Penguin, and a couple of those went back *twice*; almost all had m/b’s replaced, and one a CPU, I
    think. That’s an absurdly high percentage….

    Now, about what you’re looking to build – you say that you want 1U, and mention rackspace: in my experience, rackmounts are a *lot* larger than a pizza box, so I’m a little confused at the requirements you’re building for.

    mark

  • Those two requirements together are unusual. Most rackmount 1U systems are headless, except for a basic VGA port for initial configuration.

    Dual display is generally found on desktop systems.

  • m.roth at 5-cent.us wrote:

    The rack is already full; I only get that 1U of space by relocating a spare part elsewhere, and unfortunately I have a depth limit due to the power distribution module at the rear of the rack. These computers are replacing tower PCs that sit on the floor under a desk in a rather hostile environment, so I’d like to move them to either the desktop or the adjacent rack, but space is limited in both locations (1U of short-depth rack, or about enough room for a Mini-ITX box on the desk).

    -G.

  • John R Pierce wrote:

    I agree. In this case, the floor is not the best environment for the equipment, the adjacent rack has only 1U of short-depth rack space available, and the desktop is already crowded with keyboards and monitors.

    Since the requirements are (relatively) modest (except those two), I was hoping to squeeze something in.

    Looks like I’m out of luck, and buying another full tower to hold a motherboard, a disk drive, and one expansion card.

    Sigh.
    -G.

  • Have you considered “desktop”-type cases? You could place them below the existing desktops to conserve horizontal space.

    You could also think about building a “beast” system that runs the original CentOS plus one or more guest systems (CentOS, Windows, whatever). If you use KVM virtualization and buy a motherboard with an “IOMMU” BIOS option
    (https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardware), you could pass PCI devices (second graphics card, telemetry PCI card, etc.)
    through to the guest systems, making the current systems obsolete.

    So far I have only heard about IOMMU and PCI passthrough, so do not hold me to my words, but they say it works.
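For what it’s worth, a quick read-only check for whether the IOMMU actually came up (on Intel boards you also need `intel_iommu=on` on the kernel command line) is to look for populated IOMMU groups in sysfs. A small sketch — the function name is mine, and the path is parameterized only so it can be tested:

```shell
#!/bin/sh
# Succeeds when the kernel has populated at least one IOMMU group,
# i.e. the IOMMU (VT-d / AMD-Vi) is enabled and usable for PCI
# passthrough. Defaults to the real sysfs location.
iommu_active() {
    groups=${1:-/sys/kernel/iommu_groups}
    [ -d "$groups" ] && [ -n "$(ls -A "$groups" 2>/dev/null)" ]
}

# iommu_active && echo "IOMMU groups present" || echo "no IOMMU"
```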

  • Depending on the performance requirements, and on what other systems might drive a display or VM, you might make something work with a VM hosting the applications and a remote X display (or two); FreeNX and NX are good for that and work cross-platform.
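    On the remote-display route, plain SSH X11 forwarding (ssh -X) is often enough for a single application; when it is active, sshd sets DISPLAY to a localhost-proxied value. A trivial sanity check, in case it helps (the function name is mine):

```shell
#!/bin/sh
# After "ssh -X user@host", sshd sets DISPLAY to something like
# "localhost:10.0". Returns success when the given (or current)
# DISPLAY looks like a forwarded one rather than a local X server.
display_forwarded() {
    case ${1:-$DISPLAY} in
        localhost:*) return 0 ;;
        *)           return 1 ;;
    esac
}

# display_forwarded && echo "X11 is being forwarded over SSH"
```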

  • I have a single Zotac ZBOXHD-ID11 mounted in a mobile cart, driving an ancient projector for a small classroom learning environment. I’m not sure of the motherboard designation, but CentOS 6.4 (upgraded from 6.3) plays very nicely. This is a 64-bit system. Not much help, I know, and I am having difficulty seeing all your requirements being met in an ITX form factor. FWIW: Intel N10/ICH7 chipset, quad-core Atom CPU.

    Cheers.

  • How about an ultra-small form factor desktop, such as the Dell Optiplex
    7010 USFF? Those have dual DisplayPort outputs (requiring a $7 optional video output panel) and measure 24 × 6.5 × 24 cm.

  • I was going to recommend the Optiplex 7010 as well. I run 32-bit CentOS 6.4
    on about 4 dozen of these systems here at work. The price in the US is less than $800, and the dual DisplayPort outputs easily drive two 1920×1200 LCD
    monitors (a very common configuration here at work).

    Alfred

  • [snip]

    For the display configuration, do you need to run any graphics-intensive software? If not, I have seen some devices that act as miniature broadcast devices. The monitors don’t need to be physically attached to the system unit; they do need some sort of wireless access to the server, though. They are useful for monitoring stations, electronic signage, etc., but not so good for fast updates
    (i.e., no games; video would probably be degraded).

  • We have a *lot* of SuperMicro based systems in the field, and they aren’t failing. In fact, I can’t remember the last time we had to fix an actual motherboard issue. It seems like every field hardware failure for years has come down to dying HDDs.

    We did once upon a time have a QC problem with SuperMicro, around Y2K, but that was because we chose to use AMD processors, and AMD OEM
    fan/heat sink combos at the time used little 60mm 6000 RPM pancake fans that would seize up after a few years. This was before processors had overtemp shutdown features, so once the fan seized, the processors would cook themselves.

    You can’t really lay that one at SuperMicro’s feet. AMD screwed up.

    The real fix was switching back to Intel processors, which shipped with bigger and slower-moving fans, which lasted longer.

    You’ll notice that both of these failure modes are due to mechanical wear. I can’t say I’ve *ever* seen a SuperMicro board fail in any of the solid-state components, solder joints, capacitors, etc.

  • Glenn Eychaner wrote:

    Ok, here’s a suggestion: just get the Mini-ITX and use a desktop with a dual display – you could SSH in (that’s what I have here). The other option: have you considered using a KVM switch to eliminate some of the clutter on the rack? Or use two of the monitors on the rack for this?

    mark

  • Warren Young wrote:

    Well, *all* of these are rackmount servers, with no wear from moving the server around. We started seeing userspace compute-intensive processes crash the system several times a day. We have a canned package that we send to Penguin on a disk we put in, which has a generic CentOS install, and running that, the crash is repeatable. They replace the m/b, and it doesn’t happen again (or at least with that program – we’ve got issues with some *other* users, with different software, that seem to be crashing it). For us, this is seriously important, since the users’ jobs run for days, sometimes a week or more, on the cluster….

    mark

  • Same here. We have several racks full of Supermicro systems and have never had any issues with them.

    Regards,
    Dennis

  • I’ve been following GPU passthrough with KVM casually for a while, testing on stacks from EL6 up to Fedora Rawhide. Passthrough works great – you lose guest-migration ability, of course – for everything
    *except* graphics devices. I would not consider this a viable option.

    –Pete

  • I didn’t even know that the Optiplex 7010 was CentOS compatible (though someone may have mentioned it in my previous thread); it is not on the RedHat Hardware List, nor does Dell’s web site go out of its way to mention it. Again, how does one find this kind of thing out? There has to be a better solution than 3 days of web searches, Emails to tech support, and forum posts.

    In addition, the USFF Optiplex seems to be limited to a Core i3 processor and a mere 2GB of memory, which while acceptable is not optimal (and worse than some other solutions I’m looking at).

    And for everyone suggesting KVMs, VMs, SSH, or other solutions…this is a telescope operations system, so none of those are really appropriate to the task, I’m afraid. I really want direct monitor/keyboard/mouse connections
    (and yes, I keep a hotspare warmed up at all times in case of a critical failure, and have had to use it on more than one occasion).

    And I’m sorry my postings don’t seem to thread right in the archives. I
    subscribe to the Digest form of the list and am compiling these replies using the web archives.

    Anyway,
    -G.

  • The Optiplex 7010 comes in various form factors. The one I use is approx.
    14 × 16 × 4 inches and contains an Intel(R) Core(TM) i5-3470 CPU @ 3.20GHz with 8GB
    of memory (and I think it supports 16GB with four 4GB DIMMs).

    Alfred

  • Our servers are all rack-mounted, too, and pretty much never get moved after being installed.

    In any case, I was referring to wear in the electromechanical components of a server. HDDs and fans, primarily. In olden days, optical disks, too. These are expected to fail over time.

    Define “crash the system”.

    Hard lock-up, requiring a power toggle or Reset press?

    Server unresponsive to keyboard, except for Ctrl-Alt-Del?

    Kernel panic?

    X11 unresponsive but you can still SSH in?

    User program dies mysteriously, but other programs still run?

    Keyboard lights blink in patterns, monitor won’t wake on mouse wiggle?

    Box reboots spontaneously?

    BIOS beeps?

    I don’t suppose you’ve gathered continuous temp data, say with Cacti?

    Okay, so either this one motherboard product from Supermicro has a QC
    problem, or Penguin has an application or design problem with it. Or, your environment is somehow pushing them past their design limits.
    (e.g. insufficient cooling)

    You’re painting with far too broad a brush here to say Supermicro is bad, period.
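    Even without Cacti, a cron job appending IPMI temperature readings to a log would answer the temperature question. A minimal sketch — it assumes ipmitool’s usual pipe-separated `sdr` columns, and reads stdin so the filter can be tested against canned output:

```shell
#!/bin/sh
# Filter "ipmitool sdr" output down to just the temperature sensor
# readings (column 2 of the pipe-separated table), trimming the
# surrounding whitespace so the values log cleanly.
temps() {
    awk -F'|' '/[Tt]emp/ { gsub(/^ +| +$/, "", $2); print $2 }'
}

# From cron, something like:
# ipmitool sdr type Temperature | temps >> /var/log/ipmi-temps.log
```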

  • The whole system reboots.

    No, I haven’t. It’s a thought, though the HVAC’s good (too good, he says, when he needs a long-sleeved shirt, and sometimes a sweater). ipmitool sel list isn’t showing a problem.

    Oh, except for the one or two that we sent back a *second* time, and they replaced the m/b again….

    That’s certainly not the problem.

    You like them, fine. We really don’t, and the only thing we were buying that had their m/b, etc., was honkin’ hot servers.

    mark
