OEM Suggestions


Hey, folks,

I’m working on finding some new compute nodes. What I’m looking for is a 64 core box, with room enough for a lot of RAM. I can get it from Dell, or HP (bleah! a 4U box), but I need to have three quotes, y’know. We’ve gotten a lot from Penguin in the past, but they’re all Supermicro, and we’ve had a *lot* of problems with the 64 core boxes, so I’m looking for another vendor.

Suggestions? Recommendations? People to run the other way from?
(Sun/Oracle is down there *under* “none of the above”).

mark

36 thoughts on - OEM Suggestions

  • What’s wrong with the Supermicro 64-core boxes? I’ve never seen any problems with them. I assume you mean AMD, in which case the IPMI sucks, but other than that…..

    Dell PowerEdge R815’s are pretty ok.

    Cheers,

    Andrew

  • Andrew Holway wrote:

    They’re crap. We’ve got something like a couple dozen of them, and at least 4 had to be sent back for a m/b replacement after the vendor reproduced the crashes with our test case – and one or two got sent back *twice*.

    I agree. But I do need a third quote, though I suppose I could get a reseller along with Dell and the HP. I was sort of looking for another vendor that’s got 64 cores in 1U (the Dell’s 2U).

    mark

  • If you are looking for density, shouldn’t you be using blade systems?
    I think both HP and Dell have blades that can go to 64 cores.

  • I suspect it’s total cores per machine + density; I dunno if HP/Dell/anyone else can do that sort of density on cores with blades. I have some IBM 3850s that seem to get close (and I’d gladly give them to all sorts of people that I greatly dislike)

    if blades are an option, Cisco UCS blades seem to work quite well for us, but I don’t think they get much over the dual-proc hexacore range.

  • zep wrote:

    Let me clarify a bit. In the past, we’ve gotten servers from Penguin, as I
    said, and they’re all Supermicro boxes. I’m not just looking for three quotes – that, I could get easily – I’m looking for other OEMs, in a similar price range (or lower), that might replace Penguin: a) a good price; b) *reliable* systems that they’ve QA’d; and c) decent technical support response.

    On the last, Dell’s hard to beat. Sun/Oracle ranks under “none of the above”…. So….

    mark

  • Andrew Holway wrote:

    Look nice… but trying to google someone selling *servers* with that board isn’t getting me anywhere. I happened across the Asus RS927, but that, after much looking, turns out to only do 32 cores max. Does anyone know model names of *servers* that use the Intel board?

    mark

  • John R Pierce wrote:
    The only rackmounts I see from their site are dual processor. I’ll need four – mostly, I see 16-core CPUs in our systems, though some have 12-core
    (or the weird 10-core). The Asus I saw takes four processors, but only 8-core ones.
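The socket math here (four sockets × 16 cores = 64) is easy to sanity-check on an existing node. A minimal sketch, assuming a Linux box with `lscpu`; the heredoc below stands in for real `lscpu` output (illustrative numbers) so the arithmetic is visible without needing the actual hardware:

```shell
# Sanity-check physical core count: sockets x cores-per-socket.
# lscpu_sample stands in for real `lscpu` output (illustrative numbers);
# on a live node you would just run `lscpu` instead.
lscpu_sample() {
cat <<'EOF'
Socket(s):             4
Core(s) per socket:    16
Thread(s) per core:    1
EOF
}

sockets=$(lscpu_sample | awk -F: '/^Socket\(s\)/          {gsub(/ /,"",$2); print $2}')
cores=$(lscpu_sample   | awk -F: '/^Core\(s\) per socket/ {gsub(/ /,"",$2); print $2}')
echo "total physical cores: $((sockets * cores))"   # 4 x 16 = 64
```

Note that this counts physical cores; with SMT enabled, `lscpu`’s CPU count will be higher than sockets × cores.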

  • John R Pierce wrote:

    As a comparison, the Penguins that we got a couple-three years ago were in the $11k range, and were 1U Supermicros. The Dell and HP I’m looking at are about $13k, so that’s around where I’m looking.

    Does anyone have any preferred *vendors* – as I say, we used to like Penguin. Silicon Mechanics? AVADirect? I’m just throwing out names here I’ve run into while googling.

    mark

  • Andrew Holway wrote:

    The budget’s limited – maybe one, maybe two, but next year, possibly more. As a comparison, I’ve got a cluster with a head node and 22 compute nodes:
    one 12-core, 10 or 11 48-core, and the rest 64-core…. This is for serious HPC.

    mark

  • Doesn’t that stuff parallelise? Quad socket boxes are pretty rare in HPC
    nowadays as the amount of memory available in a dual socket box is so high.

    Infiniband?

  • Andrew Holway wrote:

    Not on this. But the code is written for parallel execution – it uses Torque, a standard batch package descended from the Beowulf cluster world.

    mark
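For readers unfamiliar with Torque: jobs are shell scripts with `#PBS` directives submitted via `qsub`. A minimal sketch of such a job script – the job name, node/core counts, and `./my_solver` binary are all illustrative assumptions, and `mpirun` flags vary by MPI stack:

```shell
#!/bin/bash
#PBS -N hpc_job              # job name (illustrative)
#PBS -l nodes=4:ppn=64       # 4 nodes, 64 cores each -- match the cluster
#PBS -l walltime=24:00:00
#PBS -j oe                   # merge stdout and stderr into one file

cd "$PBS_O_WORKDIR"          # Torque starts jobs in $HOME by default
# $PBS_NODEFILE lists the hosts Torque allocated to this job
mpirun -np 256 -machinefile "$PBS_NODEFILE" ./my_solver input.dat
```

Submitted with `qsub job.sh`; `qstat` shows the queue state. This is a scheduler-dependent fragment, not something runnable outside a Torque installation.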

  • HP blades pop up with a list price around $12k. If you need enough to fill a chassis (and can get a discount), I’d think that would come out in the same neighborhood.

  • the blade chassis infrastructure is expensive. for supercomputing, the cheap cloud tray computers are probably much more cost effective.
    bunches of folks, HP and Dell included, are making server trays that have 2 or more complete nodes per 2U chassis, much cheaper than traditional blades, like the Dell C microservers.

    I *am* kind of surprised at the 64 core per node thing, single large nodes like that are more typically used as enterprise database servers where you have 100s/1000s of clients doing SQL queries concurrently…
    What the geophysicists at my son’s U are using for seismic processing and such are racks of 2 socket machines with a pair of NVidia Cuda processors each to do the numeric heavy lifting.

    if those 64 processor servers are like $14000 or whatever, I think I’d buy 18 of these instead :D
    http://www.ebay.com/itm/HP-ProLiant-DL180-G6-2U-2X-XEON-HC-X5650-2-66GHz-12xTRAYS-24GB-P410-RAID-512MB-/261448081851
    (ok, that particular chassis is set up as a storage server, with 14 drive bays on a RAID card, but you can find lots of similar things in various configurations)

    We switched from HP to Fujitsu a couple of years ago, and couldn’t be happier. Look into their RX line; I think the RX500 and RX900 (iirc) do
    4 and 8 sockets.

    digimer

  • >>>
    >>> I’m working on finding some new compute nodes. What I’m looking for is
    >>> a 64 core box, with room enough for a lot of RAM. I can get it from
    >>> Dell, or HP (bleah! a 4U box), but I need to have three quotes, y’know.
    >>> We’ve gotten a lot from Penguin in the past, but they’re all
    >>> Supermicro, and we’ve had a *lot* of problems with the 64 core boxes,
    >>> so I’m looking for another vendor.
    >>>
    >>> Suggestions? Recommendations? People to run the other way from?
    >>> (Sun/Oracle is down there *under* “none of the above”).
    > I’d also recommend Fujitsu. If you’re desperate there is the SGI H2106-G7
    > in 2U size.

    I’ll check that out. We *do* have an SGI… a UV 2000.

    mark

  • bingo. I think too many people buy ‘whitebox’ Supermicro stuff direct and self-integrate, then are surprised when there are issues.
    Integration needs to include testing. All that integration and testing is why brands like HP are more expensive – you can usually assume it’s going to work.

  • try this…

    # dmidecode -t 1,2
    …
    Handle 0x0001, DMI type 1, 27 bytes
    System Information
        Manufacturer: SGI.COM
        Product Name: ISS3500
        Version: ISServer
        Serial Number: Yxxxxxx
        UUID: (big messy string)
        Wake-up Type: Power Switch
        SKU Number: To Be Filled By O.E.M.
        Family: 1234567890

    Handle 0x0002, DMI type 2, 15 bytes
    Base Board Information
        Manufacturer: Supermicro
        Product Name: X8DTE-F
        Version: 1234567890
        Serial Number: VM1ASxxxxxxxxx
        Asset Tag: 1234567890
        Features:
            Board is a hosting board
            Board is replaceable
        Location In Chassis: To Be Filled By O.E.M.
        Chassis Handle: 0x0003
        Type: Motherboard
        Contained Object Handles: 0
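If you want just the OEM-vs-actual-board comparison across a fleet, the two `Manufacturer`/`Product Name` pairs can be pulled out with awk. A sketch, fed with sample text modeled on the output above so it runs without root; on a real box you would pipe `dmidecode -t 1,2` in instead:

```shell
# Extract system (OEM) and baseboard identity from dmidecode-style output.
# The heredoc mimics `dmidecode -t 1,2`; pipe the real command in on a live box.
summary=$(awk -F': ' '
  /DMI type 1/    { sec = "sys" }
  /DMI type 2/    { sec = "board" }
  /Manufacturer:/ { mf[sec] = $2 }
  /Product Name:/ { pn[sec] = $2 }
  END { printf "system: %s %s\nboard:  %s %s\n",
               mf["sys"], pn["sys"], mf["board"], pn["board"] }
' <<'EOF'
Handle 0x0001, DMI type 1, 27 bytes
System Information
        Manufacturer: SGI.COM
        Product Name: ISS3500
Handle 0x0002, DMI type 2, 15 bytes
Base Board Information
        Manufacturer: Supermicro
        Product Name: X8DTE-F
EOF
)
echo "$summary"
```

For scripted inventory there is also `dmidecode -s system-manufacturer` and `dmidecode -s baseboard-product-name`, which print single values directly (root required).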

  • On 30.05.2014 at 19:34, John R Pierce wrote:

    True. The thing I hate about HP is that their SSD offerings are IMO a joke.

    Not only are they several times as expensive as an equivalent Intel SSD (even taking into account that we don’t …)

  • hmm?

    The HP H220, H221, and H222 are SAS2 HBAs; also the S08e, but that’s older and was only sold to support a specific P2000g3 array. AFAIK, the H22x are LSI 2008-based (9211-xx).
