Old HP Xeon Server Blade With Only SCSI HDD Ports & CentOS


Fernando,

That sounds like a cool project. As has been mentioned by several in the thread it’s going to be a bit of a challenge to get it running. If the beast has 8 slots, and could draw a total max of 64A (yes, I
rounded up, for a reason!) or so at -48V nominal, then each blade is going to draw roughly 8A at -48V (let’s round that to -50 for easy calculations) or a max of about 400W per blade. So a 400W telco power supply is going to at most run one blade at max draw, and it could possibly run two blades with small drives.
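That back-of-the-envelope budget, sketched out (all numbers are the rounded assumptions above, not measurements):

```python
# Rough blade power budget, using the rounded figures from the thread.
total_current_a = 64   # rounded-up max chassis draw at -48V nominal
nominal_volts = 50     # -48V rounded to 50 for easy arithmetic
slots = 8

amps_per_blade = total_current_a / slots            # 8 A per blade
watts_per_blade = amps_per_blade * nominal_volts    # ~400 W per blade

print(f"~{amps_per_blade:.0f} A, ~{watts_per_blade:.0f} W per blade")
# So a single 400 W telco supply covers at most one blade at max draw.
```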

Having said that, Ultra320 drives aren’t too hard to find, but since you’re in .AR you might be looking at international shipping. Now, it depends on the exact model of Xeon as to whether 64 bit will work or not, and even if it does you’re more likely to find that 32-bit runs better, and CentOS 5 runs better than 6. One of my main dev/test server boxes is running RHEL 6 32-bit on dual 2.8GHz Xeons of the previous generation, and it makes a fine testbed/try-this-out server. And, yes, I do know virtualization of things is the ‘Way to Go These Days (TM)’.
Well, unless you need a parallel port for an offbeat device controller for an astronomical instrument (IEEE-1284 CAMAC Crate, anyone? What about a SCSI CAMAC Crate (we have three)). Or you’re doing other things where virtualization is simply not the right thing to do.

Now, if you’re in the ‘experimenting’ mood you might look at what it would take to adapt something like http://shop.codesrc.com/index.php?route=product/product&pathY&product_idP
(a 50-pin narrow SCSI to SD flash card board) to LVD UW.

While this box doesn’t qualify as ‘vintage’ yet, if you want to see the lengths to which some people will go you need to go lurk a while on the vintage-computer.com forums; there are people trying to do things as
‘interesting’ as rebuilding an original PDP-8 (a ‘straight 8’) from scratch with just a collection of flip-chips, a blank wirewrap backplane, a vintage enclosure, and a set of schematics (and more time on their hands than I have!). So what you’re wanting to do, if you have the time and it’s more for hobby purposes (or development purposes, even), is nowhere near as far-fetched as some of the things I’ve been reading lately. (Long story, and way OT).

Now, if I may rant just a bit.

Not everyone on this list is here for professional reasons. I am; but many are not. Many people run CentOS because it’s just plain fun to run it on various and sundry ‘cast-off’ hardware. Fernando has been around for a while; he’s not a newbie. He has a new (to him) toy, and wants to make it work. Telling him ‘that’s too old to be useful’ is useless.

In my position with this not-for-profit astronomical observatory, I get this type of answer way too many times: ‘You have a VAXstation 4000/90
and you need $off_the_wall_software? Why do you want that old thing?
You need $insert_computer_flavor_of_the_day and our new
$super_duper_alley_ooper_ka-ching_product! Forget that old stuff!’ Yes, we really do have a runnable VS4000/90 (two of them, in fact), and yes it did something important (which is why the software was needed), and no it can’t be replaced by something newer without multiple-tens-of-kilobucks worth of new controls hardware and more than that in software development, so can I just get my question answered, please? (Thankfully, we are building something newer thanks to crowdfunding, but it’s not yet operational). (And just in case you think you might have that part, no, you don’t, unless you had access to internal development information at a particular CAMAC Crate vendor who never supported SCSI CAMAC on VMS later than 5.2, and we needed to at least try to upgrade to OpenVMS 7.3 on VAX or maybe even OpenVMS 8 on Alpha (too much critical (and time-tested, certified) Fortran code to go to anything other than VMS)).

Sorry for the rant.

19 thoughts on - Old HP Xeon Server Blade With Only SCSI HDD Ports & CentOS

  • And even more sorry that I didn’t make it clear that the rant was directed at no-one in particular, but just out there on the list, and definitely not directed at Fernando.

  • Thanks for the pointer Lamar.

    That is waaaaay too much for me. Yet, if I run across a Commodore 64 in a dumpster, I’ll surely open its guts and take its SID 6581 sound chip for what I wanted all my childhood and couldn’t have: a “Stereo SID” cartridge. ;)

    Of the 8-bit years I’ve got a working C128D, and of the 16-bit age an Amiga 1200 (it works, but it’s not hooked up; I lost the scandoubler, so it’s impossible to display its low-refresh modes on today’s VGA monitors). I’ve also got a working 4-way SMP monster, the ALR Quad6 (4x Intel Pentium Pro 200MHz) with 256MB RAM. But that thing is really a toaster.

    Basically I never had anything to do with blades, and since I have never worked for big corporations (nor do I plan to, unless one of my favorite tech firms decides to hire me, and there are only 2 in that category ;), this was/is my only opportunity to get up close with blade technology.

    It was already fun learning how it all works, and the hardware design. For instance, I thought the blades could operate by themselves since they have Ethernet ports, but that’s not entirely true: without the interconnect backplane and the comms blade (which I found I also have) that provides the Ethernet ports, a blade is crippled.

    Back to topic.

    It seems the only stumbling blocks for me so far are:
    1. Finding a 48V power supply
    2. Learning the right polarity on the connectors in the back of the chassis so I don’t burn things down
    3. Dealing with the UW SCSI issue, either by finding a cheap used UW SCSI drive or buying an adapter (last night while searching about this, I stumbled upon a Florida, USA store selling a UW SCSI to SATA adapter for about $75, instead of the usual $150-$170 minimum you’ll find on eBay or Amazon).

    In the end, if this exercise gets nowhere, I’m gonna put the blades up for sale on a local auction site, but first I’ll take out the 8GB of DDR2 ECC RAM, which I can use in my Sun AMD Opteron box…

    Thanks for the help -you and everyone who replied- and have a nice weekend!

    FC

    During times of Universal Deceit, telling the truth becomes a revolutionary act
    – George Orwell

  • Why would it have been tossed in the first place?…..I’m assuming SOMETHING was amiss…..and forced the trashing of this equipment.


  • Oh…..well that explains it! LoL! A newly minted admin who doesn’t see the “potential” of some H/W because he wasn’t trained on it…..so he thinks it’s too old to serve a real purpose in the modern world…..LoL!


  • Hmmmm…..guess everyone’s definition of fun is different?…..LoL!

    EGO II


  • (If this double posts, my apologies. The first one was sent from the wrong address and I’m not sure it went through)

    How much current do you need? I bet I could find you one (if it’s not a ridiculous amount). There’s a surplus place here in the Portland area that has all manner of marginally useful power supplies.

    –Russell

  • I imagine it was something like: UW SCSI devices are nowhere to be found on the local market -which I’ve been able to confirm with a simple search on local auction site and eBay wannabe MercadoLibre-.

    Probably their HDDs died, they asked the local HP branch for a quote, and found the price unacceptable (surely the local HP branch stocks those drives, but since they’re scarce and expensive to import on demand, they surely reserve them for high-value customers with hardware maintenance and support contracts… so if you’re one of those high-value customers you get replacement UW SCSI drives; if you’re a tiny shop, maybe you’re told “sorry, we don’t have any”). But I’m just guessing, painting a possible scenario.

    Plus, importing used ones from eBay, which would have been the normal route in this case, is, well, increasingly difficult down here. It’s hard to explain the rationale of the irrational =>
    http://www.bbc.com/news/world-latin-america-25836208

    Or perhaps someone got a kickback for buying a new server, so the old one NEEDED to be deemed obsolete. The awful truth about the realities of this world: follow the money. A third possibility is “new admin doesn’t have a clue about the old kit, so he wants to get rid of it and buy something simpler that he does understand; and since he’s the ‘expert’ in charge and nobody knows better -read the book ‘The Peter Principle’-, they take his word as gospel.”

    Of course I’m not accusing anyone of anything, just thinking aloud of possible scenarios :).

    I’m just happy of finding some nice kit for $0. ;)

    Dumpster diving is a common practice, I do it often. I’ve been able to find nice stepper motors and power supplies from tossed inkjet and laser printers, for instance.

    Dumpster Diving: beware, it’s an addiction ;)
    https://www.youtube.com/watch?v=EXg5MdYMbHs

    FC

    During times of Universal Deceit, telling the truth becomes a revolutionary act
    – George Orwell

  • Yes, just a bit.

    If you were in the US I could hunt it down for you, but I must say, in my best “minister from Blazing Saddles” voice:

    Son, you’re on your own.

    Unless you’re willing to pay for shipping. :)

    –Russell

  • I’ve retired all the older Xeon “P4” class hardware from my development lab, as it’s increasingly unreliable once it gets more than 5 years old. A huge 6000 watt chassis of 8 dual single-core servers with 8GB max RAM each can *easily* be replaced with a single 1U or 2U server with dual 8-core processors, 128GB RAM, and VMware or whatever.

  • Dual 8 core? Try dual 10-core (40 virtual cores) with 384G RAM and up to 32T of disk space in 2U. And that’s just the top-of-the-line Dell stuff.

    And every time the server lines get refreshed, the servers get more and more capable.

    … and take longer to boot. Sigh.

    –Russell

  • Thank you John, that is what I think all the time. But as it is a play-project where he tries to get that trash running at all (…)

    btw, I looked it up for you guys: my toaster draws 1.2KW, my water kettle was the one with 2.2KW ;-)

    In the HP blade configurator, one blade server like the one you described draws 141W, but 2 of them in an enclosure with 2 PDUs need 625W AC-in max. I am sure you can expect real power consumption far below that.
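    A rough check of those configurator numbers (the 141W per blade and 625W AC-in max are the quoted figures; splitting out the enclosure/PDU overhead is my own arithmetic):

```python
# Overhead implied by the HP configurator figures quoted above.
blade_watts = 141          # one blade server, per the configurator
blades = 2
chassis_max_ac_in = 625    # 2 blades + enclosure + 2 PDUs, AC-in max

enclosure_overhead = chassis_max_ac_in - blades * blade_watts
print(f"Enclosure/PDU budget: ~{enclosure_overhead} W")  # 343 W
# Real draw will sit well below these configurator maximums.
```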

    With kind regards

  • I’m not bothered – I agree. If what you have meets your needs, why upgrade?
    And a *lot* of folks, not just at home, do not have a budget for NewK3wlStuph!!! (Which is how I feel about fedora….)

    We’ve got servers that are 5+ years old, including a once-supercomputer from SGI that’s from, I think, ’03. And if *anyone* thinks we need to get rid of it, they can contact me offlist, to arrange, from their pocket, a donation to the civilian sector of the US federal gov’t…..

    mark

  • Heh, SGI Altix….. Got two, formerly used for weather modelling. I have successfully rebuilt up to CentOS 5.9 (I haven’t been able to justify the work yet to get to 5.10, but over the summer perhaps) on our 30-CPU SGI Altix 350 system, and it’s running on a small 4-CPU Altix 3700 (we have another 3700 with 20 CPUs, but it has a hardware issue somewhere). RH supports IA-64 in RHEL 5, so if you have an RH contract you can run straight RHEL5 on it.

    My newest servers are by most standards fairly old these days; a pair of Dell PE 6950’s and a scattering of PE 1950’s. But they’re solid, and they do the job.

  • Lamar Owen wrote:

    *ping*

    SGI Altix 3000 here. There was some reason – support, I believe, from SGI, that it’s running SuSE 10. And the main users have collaborators around the world, some of whom are on *older* systems due to budget or export regs. But it’s still test run occasionally to model protein folding….
    (But I didn’t say where I work, since I don’t speak for the Institutes, the agency, nor my employer (a federal contractor)….)

    Oh. START THE PROCESS to budget for replacements of the PE 1950’s. Start it last week. We replaced all ours a couple of years ago – inside of a month, we had something like 4? 6? of them have the RAID daughterboard croak (talk about quality control!), and they’re years out of support, anyway. R410 or R610’s’ll do you.

    mark

  • So has anyone done a cost analysis on the point where it is an overall win to replace those old power hogs even if they still work if you consider several years of savings on power/AC/space/time and maybe the possibility of replacing at least 4 old clunkers with VMs on a single new box? Or does no one look at the big picture – like the IRS
    ending up paying Microsoft for ‘custom’ support of XP that they’ve known for years had to go?

  • Hi Les,

    we did that this winter on another floor and replaced a complete server room, containing several racks full of 1850/1950/2850/2950/R310 machines, all but the last with 2 or more local SCSI disks and dual power supplies.

    All that now runs in a virtualized environment on 2+1 R720s; we now have 3 R720s replacing 4 racks in another room. This means no air conditioning, UPS, or switches there. Overall it averages 31KW lower power consumption.

    Now for the hardware cost: 60K EUR for servers, storage upgrade and licenses, saving 262MWh or 52K EUR per year. So after 14 months of operation the energy savings pay for the hardware. Not to speak of the overall TCO, including lower staff and maintenance costs for the large amount of hardware now switched off.
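    The payback arithmetic above checks out; a minimal sketch using only the quoted figures (60K EUR hardware, 52K EUR/year saved from 262 MWh):

```python
# Payback check for the consolidation figures quoted above.
hardware_cost_eur = 60_000     # servers, storage upgrade, licenses
annual_savings_eur = 52_000    # value of 262 MWh/year saved

payback_months = hardware_cost_eur / annual_savings_eur * 12
implied_eur_per_kwh = annual_savings_eur / 262_000

print(f"Payback ~{payback_months:.1f} months at ~{implied_eur_per_kwh:.2f} EUR/kWh")
```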

    With kind regards

  • Yes, I have.

    A 1950 fully loaded with 2x SAS drives will pull about 350W or so, max.
    After applying a max 1.4 PUE (our PUE, of course, varies with the season) that gives about $275 per year per box. Since the 1950’s were donated to us as NOS, and they are quite capable machines for basically any workload we have other than what the Altix boxen would do (say what you want; 30 1.5GHz IA-64’s with optimized code and 54GB of RAM will crunch numbers quite well, and costs only 30 cents per hour to run (we don’t run it 24×7, but only as needed and in the winter, when it helps heat things)), it isn’t cost-effective for us to replace them at the moment. The EMC Clariions that they’re connected to take far more power than that, running around 3KVA per rack ($2,500 or so per year per rack). But that cost would be there with a beefier virtualization box, too. In our particular case, the power cost for storage swamps out the power costs for the servers.
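    That $275 figure back-solves to an electricity rate of roughly $0.064/kWh; a sketch of the calculation (the 350W and 1.4 PUE are from above, the rate itself is my assumption):

```python
# Annual power cost of one box, with cooling overhead folded in via PUE.
def annual_power_cost(watts, pue, usd_per_kwh, hours=24 * 365):
    kwh = watts * pue * hours / 1000
    return kwh * usd_per_kwh

# PE 1950 figures from the post; the $/kWh rate is back-solved, not stated.
cost = annual_power_cost(watts=350, pue=1.4, usd_per_kwh=0.064)
print(f"~${cost:.0f}/year per box")  # close to the $275 quoted
```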

    I’m looking at putting oVirt on a few of the 1950’s, pulling things from VMware on the 6950’s over to them, and retiring the 6950’s, which have run continuously for the last 7 years.

    We’re a 501(c)(3) and I’d be glad to take your donation of something better, of course. :-)