Analyzing The MBR


Is there any tool for analysing the MBR on a computer?
I know one can just dd it and see roughly what it contains. But surely one should be able to work out the exact content of the MBR and the neighbouring sectors read at boot time?

I had a difficult day, probably due to my ignorance, which would have been solved at once by such a tool. I had taken one of three hard disks out of my home server
(to see exactly what it was, as smartctl said it was not in its database) and this had the effect of altering the order of the disks in the BIOS, preventing re-booting.

It was only after I had re-installed CentOS in a spare partition that I realized what had happened. Incidentally, before this I had tried what I take to be the standard way of solving this problem, by running a CentOS Live USB stick, mounting the root partition and trying to chroot to this, but that did not work –
chroot on the stick would not run, and neither would chroot on the disk.

I’m wondering if there was some other method I could have tried?
For example, I tried running a Fedora netinstall USB stick, which has a “Try to repair the system” option in Troubleshooting. This saw the system OK, but did not have grub-install on it. As far as I could see, none of the CentOS install disks has such a tool on it?

19 thoughts on - Analyzing The MBR

  • The dd command shows you **exactly** what is in the MBR and, if you want, the following sectors. But the following sectors are not particularly relevant to boot. The MBR contains the boot record and the partition table. There is not room for anything else. But your problem is not with the MBR, so the solution does not lie there.

    Although you have already reinstalled, you might have recovered by changing the boot order of the hard disks in the BIOS configuration for your computer. Most computers have that capability these days. You might also have booted to a recovery disk (most install DVDs have recovery mode as a menu option) and noted the sequence in which the drives were recognized by BIOS by using the dmesg command. Perhaps you plugged the drives back into different locations on the bus. Are they PATA or SATA?

    As far as what appears to be your original problem, discovering information about a hard drive, the smartctl command can give you plenty of information about a drive even if it is not in the database. You just need to use the -a or
    -x options. You could also have used fdisk -l /dev/ to display the basic capacity information about the drive. And the dmesg command can also give you information about your hard drives and the way the kernel sees them before you need to use rescue mode.

    I hope this helps a bit for future issues.
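
As a concrete sketch of the save-and-inspect workflow described above (using a scratch image file in place of a real device; against real hardware the same commands need root and a /dev/sdX device):

```shell
# Scratch image standing in for a drive; use /dev/sdX (as root) on real hardware.
truncate -s 10M disk.img

# Save the first sector: 446 bytes of boot code, a 64-byte partition table,
# and the 2-byte 0x55AA signature -- 512 bytes, with no room for anything else.
dd if=disk.img of=mbr.bak bs=512 count=1

# Inspect the raw bytes, and let file(1) try to identify the boot sector:
od -A x -t x1z mbr.bak | head -4
file mbr.bak
```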

  • ARGH! I just had to deal with two servers that failed, and it took me two and a half *days*. Some things to take away:

    1. Disconnect everything else that might be plugged in (like a RAID box on a controller that would let the system boot off it).
    2. Go into the BIOS, and reorder the HDD so that the one you want it to boot off of is first.
    2.a. Don’t assume, when you plug stuff back in, that the BIOS won’t decide on its own to be “helpful” and reorder….
    3. Linux rescue, fdisk -l
    4. grub-install

    mark

  • David Both wrote:

    Is this strictly true – that only the MBR is read at boot-time?
    I have saved and re-inserted the MBR on occasions
    (not in the present case)
    and it did not seem to be sufficient. I notice for example that gparted leaves quite a lot more space at the beginning of the disk, as does Windows.

    Well, that is more or less what I said –
    I only realized that was the problem after trying many other solutions. In self-defence I would say that I read half-a-dozen articles on the web telling you what to do if grub fails, and none of them mentioned the possibility that the hard disk order might have been changed – in my case by running the system with one disk removed, and then with the disk back in place.

  • The MBR has two elements:

    A) the boot code, which is read and executed by the BIOS at boot time, only on the boot drive, and
    B) the master partition table, which is read on any drive when it’s inserted
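
The two parts can be carved out of a saved first sector at fixed offsets (a sketch against a scratch image; the file names are illustrative, and on real hardware you would substitute the actual device, as root):

```shell
# Scratch image standing in for a real drive.
truncate -s 1M disk.img
dd if=disk.img of=mbr.bak bs=512 count=1      # the whole first sector

# A) boot code: bytes 0-445
dd if=mbr.bak of=bootcode.bin bs=1 count=446
# B) partition table: four 16-byte entries, bytes 446-509
dd if=mbr.bak of=ptable.bin bs=1 skip=446 count=64
# Boot signature: bytes 510-511 (55 aa on a bootable disk)
od -A n -t x1 -j 510 -N 2 mbr.bak
```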

  • John R Pierce wrote:

    That doesn’t really answer my question;
    I know (roughly) what the MBR, i.e. the first 512 bytes, contains. But I notice that my laptop, for example, leaves 64 sectors for something at the start of the disk;
    and when I get the first 2048 bytes with
    sudo dd if=/dev/sda of=sda.mbr bs=2048 count=1
    I see that there is plenty on the disk after the first 512 bytes.
    (Admittedly Windows might have written that, as I have a Windows partition.)
    But does Linux ever write anything in bytes 513-1024, for example?

    Does grub2 write more than CentOS grub?
    It seems to me to be difficult to have CentOS and Fedora as alternative OSs on the same machine?

  • traditional PC partitioning tools, dating back to MSDOS, put partitions on ‘cylinder’ boundaries. this is a bad idea on modern disks, whether they are SSDs that often have 128K physical write blocks, newer HDs with 4096-byte physical sectors, or RAIDs where several of the above are striped together.

    the rest of the space between the sector 0 MBR and the first primary partition is completely empty, nothing puts anything there.

    I always start my first partition at a round number SECTOR, like 128s
    (which is 64k bytes assuming 512B sectors) so everything is aligned on a power-of-two boundary…. I use parted, rather than fdisk, to do this. something like…

    parted -a min /dev/sdb mklabel gpt mkpart pri 128s -1s quit
    mkfs.xfs -L datavol1 /dev/sdb1
    echo “LABEL
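
As a sanity check on the arithmetic in that post (the numbers are the poster's; nothing here is device-specific):

```shell
# 128 sectors x 512 bytes = 65536 bytes = 64 KiB, a power-of-two boundary,
# so the partition start is also aligned for any 4K-sector or stripe size
# that divides 64K.
start_sector=128
sector_size=512
echo $(( start_sector * sector_size ))   # prints 65536
```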

  • Timothy Murphy wrote:

    Or what might be happening is that it’s leaving 1M or so, to align the partitions properly for best access times. Certainly, I’ve been doing that manually, or in our kickstart file, for several years now.

    mark

  • John R Pierce wrote:


    I find mkpart pri 0.0GB x.GB
    *always* gives me aligned partitions (and parted – talk about user hostile programs! “Not aligned”, with not a clue as to what it actually wants….)

    mark

  • I’ve taken to always running parted with -a none, as its alignment rules are based on old concepts like cylinders and heads, which are meaningless and even WRONG on today’s storage devices. (255 heads, 63 sectors per track is not uncommon; this means everything on a cylinder or head boundary lands on an ODD
    sector boundary, ouch!)

  • New bigger disks may use 4k physical sectors but report 512 for backwards compatibility. If you don’t write 4 contiguous sectors it has to read, wait for the disk to spin back around, then write, merging in what you did write. Which means writes will be very slow if your partitions are not aligned on 4k boundaries. I haven’t had to deal with many of these yet so I’ve mostly just installed gparted from epel and used its defaults rather than doing the math myself.
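
The alignment rule behind these posts can be written as a small check (a sketch; `check_4k` is just an illustrative helper name, and note that 4 KiB is eight 512-byte logical sectors, as a later reply points out):

```shell
# A start sector is 4K-aligned when it is a multiple of 8
# (8 x 512-byte logical sectors = 4096 bytes).
check_4k() {
    if [ $(( $1 % 8 )) -eq 0 ]; then
        echo "sector $1: 4K-aligned"
    else
        echo "sector $1: NOT 4K-aligned"
    fi
}
check_4k 2048   # typical modern default start
check_4k 63     # old cylinder-style start: falls on an odd sector
```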

  • John R Pierce wrote:
    hostile programs! “Not aligned”, with not a clue as to what it actually wants….)
    are based on old concepts like cylinders, heads, which are meaningless and even WRONG on today’s storage devices. (255 heads, 63 sectors is not uncommon, this means everything on a cylinder or head boundary is on an ODD
    sector boundary, ouch!)

    Dunno ’bout that, but my method is what I found after much googling, and seems to provide better throughput on the drives, *esp* if they’re large (>
    2TB), and have physical sectors of 4k, but which are usually presented as the traditional 512B

    mark

    PS: I HATE manitu, I *hate* IX, and I’ve emailed them to that effect, the
    *ssh*l*s.

    I’ll ask again: is it against the rules of this list to have whitelisted posters?

  • some new disks even report they are 4K and allow 4K sector operations, which file systems like XFS support natively. This is going to be increasingly common going forwards.

    [btw, that’s 8 consecutive 512B sectors for 4K]

  • John R Pierce wrote:

    You say that with supreme self-confidence, but I have just looked at 3 disks with eg
    [tim@helen tmp]$ sudo dd if=/dev/sdb of=sdb.mbr bs=2048 count=1
    and in each case there is material after the MBR, eg “od -c” shows two have the following at the same place
    0001360 376 L o a d i n g s t a g e 1 .
    0001400 5 \0 . \0 \r \n \0 G e o m \0 R e a d
    0001420 \0 E r r o r \0 273 001 \0 264 016 315 020 F
    while the third has
    0001420 \0 353 376 l o a d i n g \0 . \0 \r \n \0
    0001440 G e o m \0 R e a d \0 E r r o r

    I don’t know if this is relevant, but the first two were on machines running CentOS and grub, while the third was on a machine running Fedora and grub2 .

    I notice that the file command, surprisingly, seems to analyze the MBRs, thus answering my original query, though with slightly different outputs from grub and grub2 machines. E.g.
    [tim@helen tmp]$ file sdb.mbr
    sdb.mbr: x86 boot sector; GRand Unified Bootloader, stage1 version 0x3, stage2 address 0x2000, stage2 segment 0x200, GRUB version 0.94; partition 1: ID …
    [tim@rose ~]$ file sda.mbr
    sda.mbr: ; partition 1: ID=0x7, active, starthead 1, startsector 63, 167774145 sectors; partition 2: ID=0x83, starthead 254, startsector 167774208, 8388608 sectors;

  • Les Mikesell wrote:

    But even gparted leaves some maths to be done, e.g. since it uses MiBs it seems logical to use GiBs, which means difficult calculations like 80×1024 = ?

  • No, usually I’m just adding the whole thing as one partition so however it wants the units it is still the default.

  • That last sentence is simply wrong. GRUB will try to install stage1_5 of the boot loader there if space is available. This is to eliminate the problem with the boot sequence breaking if the stage2 boot loader ever gets physically moved on the disk. The stage1_5 boot loader understands one type of filesystem (there is a different stage1_5 for each supported filesystem), and loads stage2 from there. There is simply not enough room in the MBR for code to handle anything more complex than a short list of absolute disk addresses.

    If there is not space for a stage1_5, GRUB will still install successfully
    (you see an error message with “This is not fatal”), but will have to be reinstalled if the stage2 file ever moves to a different physical location on the disk. This can result in a time bomb, because booting can work successfully for a while using the data still present in what are now free blocks in the filesystem, and will fail when some totally unrelated action causes those blocks to be rewritten with something else.

    A lot of other boot loaders and boot managers do something similar with that space.
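
A quick way to see whether anything (such as GRUB’s stage1_5) lives in that gap is to dump the sectors between the MBR and a traditionally-placed first partition and run strings over them (a sketch against a scratch image; the file names are illustrative, and on a real machine you would point dd at the device, as root):

```shell
# Scratch image standing in for a drive; use /dev/sdX (as root) on real hardware.
truncate -s 10M disk.img

# Sectors 1-62: the gap before a first partition starting at sector 63.
dd if=disk.img of=gap.bin bs=512 skip=1 count=62

# On a disk with GRUB legacy embedded here you'd see strings like
# "Loading stage1.5" -- exactly what the od output earlier in the thread shows.
strings gap.bin | head
```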

  • I believe that the whole of the first track on a disk used to be “reserved”
    or rather used to contain the MBR only (and anything else needed by the boot loader) and the first filesystem on disk used to start at track 1. Of course, with the larger disks this got more complicated.

    Cheers,

    Cliff