Backup PC Or Other Solution


Hi list, I’m new to backup operations and I’m looking for a good system to do this job. I know that on CentOS there are Bacula and Amanda, but they are very tape-oriented. Another thing is that they are very powerful but rather complex. I need a solution for a small office using disk storage, and I found BackupPC. Many people say that it is great for small setups and for large amounts of data.

What do you think about BackupPC?
Any experiences?
What solution do you use?

Thanks in advance.

44 thoughts on - Backup PC Or Other Solution

  • Alessandro Baggi wrote:

    Les, who I’m sure will hop in, likes it. We have a home-grown system that automates rsync.

    mark

  • I’ve been using BackupPC for several years for my 10 hosts, and it works extremely well. However, it can take a lot of disk space, so I’d recommend a dedicated drive for the backups. I’ve restored many files over the years but haven’t yet needed to do a bare-metal restore.
    One further recommendation: you might also consider a second host that backs up the primary backup host in case it fails. That is what I’m doing, and since I have it, I back up all my hosts to 2 different servers.

    The BackupPC list is very active at times and can provide you with lots of tips and help.

    Pete

  • My assistant liked BackupPC. It is OK and will do a decent job for a really small number of machines (3-4, IMHO). I run Bacula, which has close to a hundred clients; everything is stored in files on RAID units, no tapes. Once you configure it, it is nice. But making the configuration work for the first time is really challenging (says one who still managed to configure it ;-)

    Good luck!

    Valeri


  • This sounds like Apple borrowed your idea for their Time Machine (I bet you have been doing it for much, much longer than Apple’s Time Machine has existed)!

    Valeri


  • Valeri Galtsev wrote:

    We try to keep five weeks. Except for the giant RAID boxen, which are more, ahhh, challenging to back up (or not, as the case may be).

    mark

  • Don’t dismiss Amanda; it works well in a disk-based setup. I don’t bother with the spooling disk, though. I back up to virtual tape slots on an external disk and rotate three external disks; two are in the firesafe at work, one is on top of my PC.


  • I can say the same about Bacula: spooling to virtual tape slots on external disks works just fine here. It has worked for more than a decade without a hitch, and I’m not changing it for the sake of change any time soon. (I originally backed up to an external SCSI tape drive using DDS2 media with virtually the same config files, but rotating multi-TB external disks is cheaper and easier.)

  • I’ve been using BackupPC to back up about 25-30 servers and VMs for a couple of years now. My backup server has a 20TB RAID dedicated to BackupPC, using XFS on LVM, on CentOS 6.latest… That backup RAID is mirrored to an identical server in a separate building via DRBD for disaster recovery. I keep 12+ months of monthly full backups and 30+ days of daily incrementals. The deduplicated and compressed backups of all this take all of 4800GB, containing 9.1 million files and 4369 directories. The full backups WOULD have taken 68TB and the incrementals 25TB without dedup.

    I’m very happy with it.

    It’s a ‘pull’-based backup; no agents are required on the clients. It can use a variety of methods; I mostly use rsync-over-ssh. All you need to configure is an SSH key so the backup server’s BackupPC user can connect to the target via SSH as a user with sufficient privileges to back up the desired file systems (a minimal sketch of that setup follows below). For my couple of Windows servers, I install a Cygwin-based rsync. BackupPC can also use nfs, smb, and tar-over-ssh as backup methods.
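
    For illustration, a minimal sketch of that key setup, assuming the BackupPC user is named ‘backuppc’ and the client is ‘client1’ (both names are made up for this example):

      # on the backup server: generate a passwordless key for the backuppc user
      sudo -u backuppc mkdir -p ~backuppc/.ssh
      sudo -u backuppc ssh-keygen -t rsa -N '' -f ~backuppc/.ssh/id_rsa
      # install the public key on the client for a sufficiently privileged user
      ssh-copy-id -i ~backuppc/.ssh/id_rsa.pub root@client1
      # verify the backup server can reach rsync on the client non-interactively
      sudo -u backuppc ssh root@client1 rsync --version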

    Adding a new host to the backup service takes me about 5 minutes. It would probably take even less time if I bothered to document and/or automate the process :)

    Users can be given access to their own backups via the web interface, and they can either download single files, a tar or zip of a directory tree, or tell the server to push a restore onto the original target. You can download or restore ANY version of any file that’s in the pool.

    The major downside is that ALL the backups have to be stored on one monolithic file system, and it uses tons of hard links. If you use XFS, this is not a problem. Maintaining a backup of your backups can be done a couple of ways; I am using DRBD to a mirror server, but there’s also a provision, which I haven’t explored, for generating archives.

  • On 07/05/2015 00:47, John R Pierce wrote:

    Hi John: when the disk fills up, on Bacula we can recycle disk volumes. What about BackupPC? Is there automatic deletion of backups past the retention time?

    Thanks in advance.

  • Hello Alessandro,

    Wednesday, May 6, 2015, 9:21:10 PM, you wrote:

    Everybody has their favorite backup program, but why rely on only one system?

    I have to back up 8 servers, and I use three backup systems in parallel.

    — BackupPC. Easy to use, nice user interface with graphical recovery of individual files. A pain to set up; basically, all errors in setup give the same error message. Reduces used space via hardlinks. The data structure is not transparent, so no recovery by browsing the storage directories.
    — storeBackup. Easy to use, easy to set up, but no nice user interface. Reduces used space nicely by using hardlinks. Used as a second line of defence. Stores 1:1 copies of the original filesystem, so easy browsing.
    — tar. Used for disaster recovery. Produces large dumps. Only used for system data, not for user data (a bare-bones example follows below).
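
    A bare-bones version of such a tar dump for system data might look like this (the target path and the exclude list are assumptions, adjust to taste):

      # dump the system to a compressed archive, skipping volatile and user trees
      tar --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/home \
          --exclude=/backup -czpf /backup/system-$(hostname)-$(date +%Y%m%d).tar.gz /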

    Spread all three systems over two independent backup machines. Put these backup servers in independent locations and sleep better :-)

    best regards

  • On 07.05.2015 at 08:35, Michael Schumacher wrote:

    Just another one (rsync/hardlink based):

    rsnapshot

    It’s used here extensively and just works (the storage is browsable).
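
    For reference, rsnapshot is typically driven from cron; a minimal sketch (the interval names are assumptions and must match what is defined in your rsnapshot.conf):

      # /etc/cron.d/rsnapshot -- snapshot rotations at two intervals
      0 */4 * * *  root  /usr/bin/rsnapshot hourly
      30 23 * * *  root  /usr/bin/rsnapshot daily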

  • On 06.05.2015 at 21:21, Alessandro Baggi wrote:

    BackupPC is good; however, it’s a pity you can’t search for a file in the GUI. But it works well; I’m backing up 32 hosts (servers, desktops).

    Can somebody tell me why it’s not available for CentOS 7?

  • On 07/05/2015 11:24, Marcin Trendota wrote:

    I don’t know why, and I don’t know whether it was included in previous CentOS releases.

    BackupPC is available for C7 from the nux repo, but that is an external repo.

  • On 07.05.2015 at 11:46, Alessandro Baggi wrote:

    It is in EPEL.

    Good enough, thanks for the info.

  • I wonder why nobody has yet mentioned rdiff-backup. It combines browsable directories with multiple versions – the version data is stored in a separate rdiff-backup-data subdirectory (one per backup task).

    One downside is that rdiff-backup causes a lot of network traffic. For that reason I currently use rsync to copy over the network, and then I use rdiff-backup locally to create a repository with multiple versions (see the sketch below).
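
    A minimal sketch of that two-step approach (the hostname and paths here are made up):

      # step 1: pull a plain mirror of the remote host over the network
      rsync -a --delete remotehost:/srv/data/ /backup/mirror/
      # step 2: version the local mirror into an rdiff-backup repository
      rdiff-backup /backup/mirror/ /backup/versions/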

    Another system that we use is rdiffweb. It uses rdiff-backup over the network and adds a web interface for clients to browse and restore files or directories. I did not personally set it up, but it seems to work fine.

    – Jussi

  • I am one of the people who use rsync with hardlinks. The reason is very simple and even humble: I built my home backup server around an OpenWrt box – a Seagate Dockstar, if you want to date that – and an external backup drive. So I wanted something low-resource that did not require me to install any packages.

    That script grew a bit (or a lot) and became my old job’s backup code. But I admit one thing it does miss is a convenient way to look for a file, especially if you physically rotate drives. If rdiff-backup will tell me when a file was last backed up/touched even when the drive with said file is not mounted, I will need to learn more about it. (The core of the hardlink scheme is sketched below.)
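
    For reference, the core of such an rsync-with-hardlinks scheme is the --link-dest trick; a minimal sketch with made-up paths (this is what gives the Time-Machine-style snapshots mentioned earlier):

      # each run creates a new dated snapshot; files unchanged since the last
      # snapshot are hardlinked rather than copied, so they take no extra space
      # (on the very first run /backup/latest does not exist yet; rsync just copies)
      today=$(date +%Y-%m-%d)
      rsync -a --delete --link-dest=/backup/latest /home/ /backup/$today/
      ln -sfn /backup/$today /backup/latest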

  • Geenhuizen wrote:

    I’ve been running BackupPC on two home servers (in different places) running CentOS for many years, and am very happy with it. I actually back up to a different disk on the same server, and then archive that onto an external disk every couple of months.

    The worst thing about BackupPC is the insane error message “Unable to read 4 bytes”, which comes up whenever anything is wrong. Possibly the worst error message anywhere?


    Timothy Murphy
    gayleard /at/ eircom.net School of Mathematics, Trinity College, Dublin

  • I use rdiff-backup, but I hesitate to recommend a tool that has been unsupported for over 6 years and does have quite a few bugs.

  • On 07/05/2015 11:55, Marcin Trendota wrote:

    So now I’m trying BackupPC 3.3.1: installed, configured, CGI configured.

    I’ve some questions:

    1) Is there a systemd unit file?
    2) Apache on C7 seems not to have mod_perl support. Is there a way to accomplish this?

    Thanks in advance.

  • 1. Redundant copies.

    2. Sometimes your filesystems are larger than the largest drives. For example, I’m currently setting up backups for a 24TB filesystem where network-based DR is not feasible (the average rate of churn exceeds the available network bandwidth). Good luck trying to find drives that big.

    I had a sense of déjà vu the other day; I was taken back to the time when I first ran into a filesystem that was larger than a backup tape, and the software I was using at the time (Amanda) assumed that a single filesystem was smaller than a single tape. (I understand they fixed that assumption shortly thereafter, but I had already moved on to another product.)

    For the record, my favourite product is Bacula.

    Devin

  • That’s an rsync protocol message, and yeah, debugging connection/authentication issues is a bit ugly.

  • My year of monthlies and month of dailies for 25 servers has stayed more or less constant in size for a year or two now, as it deletes the oldest backups. I don’t think there’s an option to delete based on volume free space; it’s age-based, so you adjust the retention age to suit.

    The compression and dedup work so well it amazes me: I have about 100TB worth of incremental backups stored on 6TB of actual disk. My backup servers actually have 32TB after RAID 6+0, but only 20TB is currently allocated to the BackupPC data volume, so I can grow the /data volume if needed.

  • John R Pierce wrote:

    I’m sure you are right. But I use rsync several times a day, and I have never received this message.

    It’s not ugly, it is inexcusable.

  • I just tried a command-line rsync to a host that wasn’t listening for SSH at all, and got…

    ssh: connect to host castillc2-PC port 22: Connection timed out rsync: connection unexpectedly closed (0 bytes received so far) [receiver]
    rsync error: unexplained error (code 255) at io.c(600) [receiver=3.0.6]

    Not exactly the pinnacle of clarity.

  • Yeah, well, but it’s free. I’m not sure you can complain too much in that case. 8-)

  • Sorin Srbu wrote:

    I find this comment, often made, completely unacceptable. The implication is that inferior code is OK if the developer is not being paid.

    (Actually, the premise is probably nonsense, as most Linux developers _are_ paid, even if formally their pay is not specifically for Linux development. But presumably the company that pays them believes that it is of value to the company to have a Linux developer on board.)

    But is Linux code in fact inferior to code produced by Microsoft, say? I don’t think so. And I don’t think Linux developers are less keen to improve their code. Just the opposite.

  • It may not be rsync’s fault.

    I vaguely recall that BackupPC uses an ancient (~2006) Perl rsync library which, for instance, does not support compressed transfers. Maybe that terse message is all that BackupPC gets from the library. :-)

    Mihai

  • There are a lot of settings, but these are probably the applicable ones…

    Main Config:
    Schedule:
    FullPeriod: 27.9
    FullKeepCnt: 24
    FullKeepCntMin: 8
    FullAgeMax: 360
    IncrPeriod: 0.97
    IncrKeepCnt: 30
    IncrKeepMin: 1
    IncrAgeMax: 30
    IncrLevels: 1

    On a few hosts where dailies are not appropriate due to how long they take, I override the schedule to do weekly incrementals instead:

    Schedule:
    FullPeriod: 89.6
    FullKeepCnt: 2
    FullKeepCntMin: 2
    IncrPeriod: 6.97
    IncrKeepCnt: 15
    IncrAgeMax: 100

    Many of those are probably defaults, but I didn’t keep track of which ones I modified.

    Another thing: many of my servers are SQL database servers (mostly PostgreSQL and Oracle). I do NOT back up the SQL data file systems directly with BackupPC; instead, I have the SQL server do archiving or scheduled dumps, and I back up those archive and dump destinations (a sketch follows below).
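
    As an illustration, such a scheduled dump feeding the backed-up area could be as simple as this cron entry (the database name and paths are made up):

      # /etc/cron.d/pgdump -- nightly logical dump into a directory BackupPC backs up
      # (the \% escapes are needed because % is special in crontab lines)
      15 1 * * *  postgres  pg_dump -Fc mydb > /var/backups/pgdump/mydb-$(date +\%u).dump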

  • The difference is that a large portion of the FOSS corpus, if not a preponderant majority, ultimately depends on the interest of the people responsible for its existence, not of the people using it. Once a project’s core team loses enthusiasm for something, or has otherwise moved on in life, the project is oftentimes left without any meaningful support.

    If a project is backed or picked up by a corporation, say Red Hat or Oracle, or a foundation, say Apache or LibreOffice, then it may have a future more or less independent of any single individual or group. Otherwise it does not.

  • Commercial software and company-backed F/OSS software get abandoned all the time.

    – OpenOffice may well die due to brain drain to LibreOffice. They’ve both got big corporate backers.

    – The MySQL mailing list is getting a tiny fraction of the traffic it once enjoyed before the Oracle takeover; MySQL won’t go away any time soon for reasons of inertia, but MariaDB and NoSQL are surely taking large bites out of its user base.

    – Remember ESD and aRTS? They’ve all but been killed off by PulseAudio. They were the “standard” of their time, backed by major Linux distributors.

    – How many “standard” window managers has GNOME had over the years?

    – How many desktop managers and GUI toolkits preceded GNOME/Gtk? NeWS, NeXTSTEP, CDE/Motif, Tk, all with big-name support in their day.

    – Adobe’s killed off dozens of products over the years. FrameMaker, Director, Flash Builder, PageMaker, Contribute, Fireworks…

    – Got a smartphone? How many apps have you bought that never went anywhere after they got your money? There’s more than one in my case, at least.

    At least with F/OSS, you have the option of taking over maintainership of an abandoned code base. My company has done that a few times now, as it was easier to do that than switch to the abandoned package’s replacement.

  • Frame isn’t dead; my wife is a technical writer in the EDA (electronic design automation) business, and that’s about all they use.


    john r pierce, recycling bits in santa cruz

  • When I think of FrameMaker, I think of the program that started out on Solaris, then moved to other big iron Unices and OS X. Wikipedia informs me that it’s been Windows-only for about a decade, which must be how it dropped off my radar.

    Still, it’s good to know the old thing is still shambling along in some form. I was impressed with it when I used it way back when.

  • That does bring back memories of Solaris and FrameMaker from the mid-’90s. We had folks using Frame as a word processor, absolutely insane, especially since they had Applixware (originally Aster*x) installed on the same machines. Fun times!

    //steve

  • Thanks, very much appreciated!

    I’ll play around with the settings a bit more, but yours is a good starter.

    Thanks again!

  • How did you get away with using 27,9 for FullPeriod? 8-)

    I’m seeing “Error: No save due to errors” and “Error: FullPeriod must be a real-valued number”, unless I change the value to e.g. 27.

    This is on BPC v3.2.1.

  • Why can’t everybody follow the standards and use a comma when writing decimals? ;-)

    Thanks for the heads-up!

  • Our standard is a “.”

    The comma is a thousands separator.

    That’s the best part about standards: there are so many to choose from!!

  • I have had good experience with Mondo Rescue (mondoarchive, mondorestore) for years. It’s a free, active project.

    See: http://www.mondorescue.org/

    We are backing up about 20 production servers weekly (using cron jobs). Bare-metal recovery has been successful, as has cloning.

    Their mailing list is helpful and polite.

    It has saved my neck many times during the last 5 years.

    Although I have no experience with Mondo Rescue on CentOS 7, I recommend it at least for the other versions. (A sketch of a weekly cron job follows below.)
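
    Such a weekly cron job might look roughly like this; the destination directory is made up, and you should check the flags against the mondoarchive man page before relying on them:

      # /etc/cron.d/mondo -- weekly bootable ISO backup to a local directory
      # -O = backup, -i = to ISO images, -d = destination, -E = paths to exclude
      0 2 * * 0  root  mondoarchive -Oi -d /var/backups/mondo -E "/var/backups/mondo"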

    Nick