Disaster Recovery Recommendations


Greetings,

I have three drives; they are all SATA Seagate Barracudas; two are 500GB; the third is a 2TB.

I don’t have a clear reason why they have failed (possibly a cheap, off-brand, flaky mobo, but it’s still inconclusive), and I would like to find a disaster recovery service that can hopefully recover the data.

Much thanks for any and all suggestions,

Max Pyziur pyz@brama.com

9 thoughts on - Disaster Recovery Recommendations

  • SystemRescue CD. If the drives are still viable and you have a spare beater system handy, the data rescue should be straightforward. Done it several times. HTH.

    Fred Roller

  • The two 500GB drives prevent the machine from starting (no boot, no lights, zip); the 2TB can be hooked up and the box runs, but the 2TB is not visible.

    So I think that I need a service; someone mentioned that this is a function of geography, so I’m in NYC, if that helps.

    MP

  • The first thing I would try to do (as someone has already suggested) is:
    put the drives one at a time into a USB enclosure or a USB-to-SATA adapter, and try to mount them on a sane machine. If you can access a drive (you may need to run fsck before anything else), check dmesg to see whether the drive produces errors every so often, and check SMART (google is your friend). And if there is any suspicion the drive may die soon, copy its content elsewhere.
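A rough sketch of those steps from a shell. The device name /dev/sdb and the mount point are hypothetical — substitute whatever your adapter actually enumerates as:

```shell
# Hypothetical device name -- check what `dmesg | tail` shows right
# after you plug the drive in, and substitute it here.
DEV=/dev/sdb

# Does the kernel see the drive at all?
lsblk -o NAME,SIZE,MODEL

# Is the kernel logging I/O errors for it?
dmesg | grep -i "${DEV#/dev/}" | tail -n 20

# SMART health plus attributes (smartmontools package); a nonzero
# Reallocated_Sector_Ct or Current_Pending_Sector is a bad sign.
smartctl -H -A "$DEV"

# If it looks alive, mount read-only first -- save fsck until you have
# a full image of the drive somewhere safe.
mkdir -p /mnt/rescue
mount -o ro "${DEV}1" /mnt/rescue
```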

    If the drives are indeed dead (say, other drives work in the enclosure and these don’t), you may indeed need to contact professional recovery services. They are expensive: in the USA and Canada the recovery may cost over $800. If you have to do things this way (say, it’s your family photos, never backed up) and you are in the USA or Canada, ask me off the list and I’ll give you references to companies I know are good. They were used by people I know personally, so you can trust (I do) what I’ve heard about them. I have no firsthand experience with professional data recovery services: my plan is, I have a good backup.

    If you are going to look for recovery services yourself, I would suggest:

    1. Stay away from those who charge an “evaluation fee” – these are the guys who can likely only solve trivial cases, which you could solve yourself.

    2. Use only well-known companies (if you don’t value your data, just forget the whole thing).

    3. The companies I would trust are those who charge you only if they successfully recover your data. They live off actual results, which means they _can_ do it. That said, one shouldn’t expect 100% recovery (though it is often almost 100% indeed); a score like 90% recovery would be very good in my book.

    Good luck!

    Valeri

    ++++++++++++++++++++++++++++++++++++++++
    Valeri Galtsev Sr System Administrator Department of Astronomy and Astrophysics Kavli Institute for Cosmological Physics University of Chicago Phone: 773-702-4247
    ++++++++++++++++++++++++++++++++++++++++

  • If you can get them mounted on a different machine, other than the one with the problem motherboard, then I suggest giving SpinRite a try.

    https://www.grc.com/sr/spinrite.htm

    It’s inexpensive, which makes it low risk and not much of a loss if it doesn’t work.

    Also, consider this a lesson learned. The cost of a second low-capacity machine, including the electric bill to run it, is insignificant compared to paying for data recovery.

    http://www.tigerdirect.com/applications/SearchTools/item-details.asp?EdpNo=7841915&Sku=J001-10169

    If you insist on keeping personal control of your data, as I do, then that is the best way to go about it. Use the second machine as your backup: set it up as a NAS device and use rsync to keep your data backed up. If you’re paranoid, you could even locate the old clunker off-site at a family member’s or friend’s home and connect to it using SSH over the internet.

    Your other option is to use a cloud storage service of some kind. Be sure to encrypt anything you store in the cloud on your own machine first, before you send it, so that your data stays secure even if someone hacks your cloud service. There’s another drawback to using a cloud as your backup: the risk is small, but you have to realize that the cloud could blow away along with your data. It’s happened before.
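One way to do that encrypt-before-upload step, as a sketch (the filenames are placeholders; gpg -c is symmetric encryption under a passphrase):

```shell
# Bundle and encrypt locally; only the .gpg file ever leaves the machine.
tar czf photos.tar.gz ~/photos
gpg -c photos.tar.gz        # prompts for a passphrase, writes photos.tar.gz.gpg

# To restore later:
gpg -d photos.tar.gz.gpg | tar xz
```

Keep the passphrase somewhere that is not the cloud, or the encryption buys you nothing.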


    _
    °v°
    /(_)\
    ^ ^ Mark LaPierre Registered Linux user No #267004
    https://linuxcounter.net/
    ****

  • I listened to the guy’s video. It pretty much sounds like what the command-line utility

    badblocks

    does. The only real novelty I hear is its latest addition, where the utility flips all the bits and writes them back into the same location. In fact it can be anything (containing both 0’s and 1’s) that is written to the sector: on a write operation the drive firmware kicks in, as the drive itself reads back the written sector and compares it to what was sent; if it differs, the firmware labels the sector – or rather block; I used the wrong term just after this guy, as I was listening while typing – as bad. Anyway, this forces discovery and re-allocation of bad blocks.

    Otherwise, bad blocks are only discovered on some read operation: if the CRC (cyclic redundancy check sum) on read doesn’t match, the firmware reads the block many times and superimposes the read results; if it finally gets a CRC match, it happily writes what it came up with to the bad-block relocation area and adds the block to the bad-block re-allocation table. If after some number of reads the firmware doesn’t come up with a CRC match, it gives up and writes whatever the superimposed data is. So those data are under suspicion, as even a CRC match doesn’t mean the data is correct. This is why there are filesystems (ZFS, to name one) that store really sophisticated checksums for each file.
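For reference, the non-destructive read-write mode of badblocks is its -n flag. On real hardware you would point it at the device node (e.g. /dev/sdX, unmounted); a scratch file stands in for the drive here, since badblocks will happily operate on a plain file:

```shell
# Create a 4 MB scratch file standing in for a drive (on real hardware
# you would use the device node, e.g. /dev/sdb -- never while mounted).
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=4 2>/dev/null

# -n: non-destructive read-write test -- read each block, write test
# patterns, verify, then restore the original data. -s shows progress
# on stderr; any bad block numbers found are printed to stdout.
badblocks -n -s /tmp/scratch.img
```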

    Two things can be mentioned here.

    1. If you notice that sometimes the machine (I/O actually) freezes on access of some file(s), it most likely means the drive firmware is struggling to do its magic on recovery of content and re-allocation of newly discovered bad blocks. Time to check and maybe replace the drive.

    2. Hardware RAIDs (and probably software RAIDs – someone chime in, I’m staying away from software RAIDs) have the ability to schedule a “verify” task. This basically goes over all sectors (or blocks) of all drives, thus:

    a. forcing the drive firmware to discover newly developed bad blocks;

    b. since a drive working on a bad block will often time out, the RAID firmware will kick that drive out and start rebuilding the RAID, thus re-writing the content of the bad block on the drive that developed it. In this case the information comes from the good drives, so it is less likely to be corrupted.

    What I described is the best-case scenario; a drive will not always time out, so even hardware RAIDs are prone to actual data corruption. Bottom line: it is good to migrate to something like ZFS.

    Thanks. Valeri


  • Linux mdraid can do verifies. Recent versions of CentOS should have a cron job that does this. Check out /usr/sbin/raid-check.

    –keith
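To kick off the same verify by hand on an md array, the sysfs interface looks like this (md0 is a placeholder for whatever array name /proc/mdstat shows):

```shell
# Start a "check" pass: read every stripe and compare data to parity,
# which is what the raid-check cron job does on a schedule.
echo check > /sys/block/md0/md/sync_action

# Watch progress, then inspect the mismatch count; a persistent nonzero
# value after a completed check means some stripes disagreed.
cat /proc/mdstat
cat /sys/block/md0/md/mismatch_cnt
```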

  • Hey Valeri,

    What you say is true and should be considered when he rebuilds his system.

    The point of my post was to suggest a way for the OP to recover his data at a reasonable cost using Spinrite.

    One point you may be confused about is that SpinRite does not care what file system you have on your disk. SpinRite does not mount the file system. It accesses the disk storage media one sector at a time, using the actual drive hardware/firmware to read the data from each sector. If it does not succeed in reading a sector, it keeps trying various methods until it gets a read or until it is satisfied that the sector is unreadable.

    When it gets a read, it writes the data back to the center of the track where it’s supposed to be, and checks that it worked by reading it back again.

    As SpinRite progresses across the storage media, the drive firmware manages marking the truly unrecoverable sectors as bad and the other sectors as good.


    Mark LaPierre

  • It sounds like the OP has fried circuit boards on at least two, and likely all three, drives (from what I read of his description). If he were able to get the drives visible on the bus, then reading the drive content sector-by-sector or block-by-block to create a raw drive image would be the first step of data recovery.
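If a drive does become visible on the bus, that sector-by-sector image is worth making before anything else. GNU ddrescue is the purpose-built tool (it retries bad areas and keeps a map file so you can stop and resume); plain dd, shown here against a scratch file standing in for the drive, is the universally available fallback:

```shell
# Scratch file standing in for the failing drive; on real hardware the
# source would be the device node, e.g. /dev/sdb (hypothetical).
dd if=/dev/zero of=/tmp/failing-drive.img bs=1M count=4 2>/dev/null

# conv=noerror keeps going past read errors; conv=sync pads unreadable
# blocks with zeros so offsets in the image stay correct.
dd if=/tmp/failing-drive.img of=/tmp/rescued.img bs=64K conv=noerror,sync
```

On the real drive, `ddrescue -d /dev/sdb sdb.img sdb.map` would be the better incantation, since the map file records which regions are still unread.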

    No, I understood that from the guy’s oral presentation. That’s why I compared the earlier version of SpinRite with the command-line “badblocks” tool, which does the same: writes/reads/compares one block (read or write unit) at a time. If I remember correctly, badblocks has a flag to not destroy the data on the disk: it reads and remembers a sector, does its read-write-compare test, then writes the original data back in place.

    Yes, thanks for the SpinRite reference. It adds to the arsenal of recovery tools for when the drive is visible on the bus and only the surface of the platters has gone bad. Often GUI tools are more transparent for humans, thus diminishing the chance of blunders compared to UNIX command-line tools.

    I sent the OP, off the list, references to recovery services that people I know personally have used with success (my own plan is: I have a good backup ;-). Unless I’m misreading what he writes, his case is burned drive circuit boards, in which case the data on the platters are most likely intact. That is the most encouraging case if a recovery company is involved: this purely “mechanical” thing is the most trivial for them (even though it involves a clean lab, fancy equipment, and capable technicians).

    Valeri

