How To Protect The Bash History File And Keep An Audit Trail On A Server


hello,

I want to protect the history file from being deleted by every user except root; is that possible?

On my server, many users can log in as root remotely through SSH, so I cannot trace which person did something wrong. So I decided to create a new account for every user and have them use 'sudo'; then I can trace which person typed which command and what he did. However, even with individual accounts, they can still easily delete their own history.

What should I do? I believe everyone runs into this sort of thing. I assume there is a graceful solution for it, as I am not experienced in server management. So, any suggestions on how to trace users, i.e. record what each user did as an audit trail that nobody except root can delete?

Thanks!

21 thoughts on - How To Protect The Bash History File And Keep An Audit Trail On A Server

  • Greetings,

    Perhaps you can look at inotify, put the .bash_history on its watchlist and then rsync the changes to a remote host.

    Haven’t tried it though.

    HTH
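A minimal sketch of that idea, assuming the inotify-tools package is installed ('backup-host' and the paths are placeholders, not from the thread). It only defines a watcher function; nothing runs until root invokes it:

```shell
# Sketch: mirror a user's .bash_history off-box whenever it changes.
# Assumes inotify-tools is installed; 'backup-host' is a placeholder.
watch_history() {
    histfile="$1"   # e.g. /home/alice/.bash_history
    remote="$2"     # e.g. backup-host:/srv/audit/alice/
    while inotifywait -e modify -e attrib -e delete_self "$histfile"; do
        # Copy the current contents away before the user can scrub them.
        rsync -a "$histfile" "$remote" || echo "rsync failed" >&2
    done
}
# Usage (as root):
#   watch_history /home/alice/.bash_history backup-host:/srv/audit/alice/
echo "watch_history defined (not started)"
```

Note that a user who unsets HISTFILE stops feeding the file in the first place, so this only catches after-the-fact tampering.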

  • Heng Su wrote:

    So, you’ve got someone inside, who’s doing nasty, or stupid, things?

    The most obnoxious, stupid idea I’ve had to deal with was a few years ago, when the company I was subcontracting for put something in the .profile to log every. single. command. a developer issued….

    However, since you’ve set up sudo for them, their commands should *also*
    be in /var/log/secure. Of course, what you need is a script to grab that, and attach to it which user had sudo’d.

    Hmmm, as I type that, I just got to thinking: do they need all root privileges, or do specific users only need certain commands? If so, it’s easy enough to limit what commands they’re allowed to run under sudo – man sudoers.
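For example, a hypothetical sudoers fragment (the group name, commands, and paths are invented for illustration; always edit with visudo):

```
# /etc/sudoers.d/deployers -- hypothetical example; edit with visudo.
# Members of the 'deployers' group may run only these commands as root;
# no 'sudo su -', no arbitrary shell.
Cmnd_Alias DEPLOY = /bin/cp, /bin/mv, /sbin/service jboss *
%deployers ALL = DEPLOY
```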

    mark

  • No, it is not a common situation. Normally you should not let anyone you don’t trust become root. For fairly obvious reasons…

    First, why do so many users need the root password? If they are developers testing things, give them their own VM to break. If they are doing a few routine things, make them log in as themselves and use restricted sudo commands (i.e. don't permit 'sudo su -'). In any case, backups are your friend. Keep copies of anything you might need, updated with frequent rsyncs from a different, more restricted machine – including the log files you might want to track.

  • Hi mark,

    Great! I think what you mentioned is exactly what I want. Mainly, I want to trace which person did something wrong on the server.

    I tried the link that Harold provided and found it a good way to protect log files; however, what I want to know is which person typed which command.

    /var/log/secure is exactly what I want, thank you so much.

    I can't really limit the sudo commands much, though; they need things like cp.

    For instance: a small team of 4 developers deploys code files to this server, and someone, say the new guy, overwrites the wrong file. I need to trace that and inform him carefully.

    Thanks.

  • Greetings,

    SCMs like SVN, git etc. are exactly for such events.

    You are taking backups, aren’t you?

  • Yeah, I know about backups; they're only for getting the server running properly again quickly after an incident. But they don't tell you who did something wrong. The normal flow is to get code from the SCM repository or a CI server, but you know some small companies make a mess of that (my current company, lol ^_^). Sometimes you have to update only one file of the project.

  • Why does it need root permissions to update this file? It doesn’t cost anything to add a user to own your application’s resources.

  • OK: assume there is a JBoss application server running under user 'jboss' on the PRD server, and 4 developers want to update a jar file on that server. They all log in as the same user 'jboss' to update files, so how can I tell which person did the thing that brought the server down, when they share the 'jboss' account?

    I didn't know how else to handle it, as I am a shoddy server admin, so I use root to maintain the application server and created 4 individual accounts on the server for the developers. If they want to copy, move, or do other operations on the deploy folders or files, they use sudo. Now I get every command they ran in /var/log/secure. ^_^

  • Hi Harald,

    Thank you so much for guiding me onto the correct path and showing me how to move on; I have learned a lot from you. Indeed, I am a developer, not an admin; why there is no admin to manage the server is a good question for the heads of my company. Anyway, that is not something I can control; I am a development team leader who just wants to make sure my team members do the correct things on the server.
    I really like Linux, especially CentOS, and I want to learn more from you. Thank you again.

    Best Regards.

  • Heng Su wrote:

    Now I have a picture of your problem.


    VCSs that let multiple people check the same object out at the same time… you're *exactly* back where you were before people were using VCSs.

    It sounds like the use of the VCS is no different than saving them in a backup directory, which is *not* how it should be used.

    Set up a real version control system. Configure it so that they *must* check out with a lock, so no one else can edit the file. Extract to test, and test the damn thing. Then label it. Then, when they agree it's ok, you, the admin, get to install it, NOT THE DEVELOPERS!!!!! AND you extract it by label (or whatever the VCS calls it) to production directly from the VCS. You're guaranteed that the wrong file won't be moved to production.
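With Subversion, which supports that locking model, the flow might look like this (the repository URL, file name, and tag are placeholders invented for illustration, not from the thread):

```
# Hypothetical Subversion session for the lock-then-label flow.
svn checkout http://svnserver/repos/app/trunk app && cd app
svn lock deploy/app.jar -m "editing"        # nobody else can commit it now
# ...edit, extract to test, test...
svn commit -m "tested fix" deploy/app.jar   # commit releases the lock
svn copy ^/trunk ^/tags/release-1.4 -m "label the tested build"
# The admin, not the developers, deploys straight from the label:
svn export http://svnserver/repos/app/tags/release-1.4 /opt/jboss/deploy
```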

    Doing it that way, it’s *very* easy to roll back (another thing VCSs are for).

    And don't let them do *anything* with production: that's your job. Right now, start logging every time something wrong goes into production. If what I read between the lines of what you're suggesting is the case, it should take a week or so to accumulate a number of problems; then dump it on your manager, tell them this is a problem, here's the evidence, and here's the answer (as above, with you as admin, as the gatekeeper to production).

    mark, many years as developer, 7 years with PVCS as config mgr,
    and plenty of years as sysadmin

  • Errr, what? No sensible VCS forces you to wait for someone else to finish their portion of the work.

    That part is true enough, although it is not so much who does the work, it is following the procedure. If you are going to be picky about who does what, there should really be a QA person involved that makes the actual decision about what version should be running in production in between the developers making changes and the operators doing the installs.

  • Les Mikesell wrote:

    You’re wrong. I’ve worked in small and large teams, and *ALWAYS* we checked out with locks. If two people need to work on one file, then either they need to work together on one copy, and check it back in together, or the file needs to be split into more than one, so that one person can work on each. This is the way it was at a medium sized environmental company I worked at (that was working on ISO 9000), and it was the way it was at a Baby Bell I worked at, and it was the way it was when I worked on the City of Chicago 911 system.

    I have vehemently been against the fad of the last half a dozen or so years, with multiple people checking out and working on the same file. I’ve seen hours or days of a developer’s work wiped out, when a team lead hacked some quick fixes, then merged the file back in.

    I haven’t had q/a move to prod; that was always the prod admin’s job, after q/a was done, and had promoted it to prod.

    mark

  • If you want to force your team to wait for your change, fine – and sometimes it is even a good idea, but the tool should not make that decision for you.

    You can’t do that without knowing it. If the user ignores the other changes in a conflict or doesn’t resolve them correctly, blame the user just like you would if he typed that in as part of his own changes.

    OK, both QA and operations should agree – QA as to whether a version can be released and operations as to when it happens.

  • Les Mikesell wrote:

    Yes, I do want to force them to wait for what one person's working on – it's not like everyone else isn't working on *other* things. And each piece should be independent – changing an interface, that is, the parameters a function (sorry, the messages a method) expects, is always a big deal.

    Yes… and one of the main points of a correctly configured VCS is explicitly to prevent one person from screwing up others’ work.

    Absolutely, though in a small shop, that tends to be developers and admin. Not that many places, unfortunately, have one or more folks who are only q/a.

    mark

  • Interface/protocol changes aren’t particularly tied to a single file or even a single project. If you are going to make changes that affect other things either everyone else needs to know what to expect or you need to be working on a branch that is kept isolated until everything else matches. It doesn’t really matter if the file was locked when you make that change or not.

    Or worse, the developer may also change hats and be the admin… But developers should be doing new, experimental things and admins should insist on testing before going to production.

  • Weak! Real fascists use sudosh!

    Rui

    ps: I’m sure there are some fascists who are more fascist so feel free to point out even better options ;)

    On 08.08.2012 23:03, m.roth@5-cent.us wrote:

    It seems you are vehemently against the development model the Linux kernel is thriving on. Or perhaps you just never had a chance to look at git.

    T.

  • +1

    bash_history is not a log for the admin, it’s a convenience for the user. Users who want to hide their tracks can unset HISTFILE or switch to a different shell. Process accounting is the only solution that’s even remotely reliable.
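To illustrate: the first lines below are all it takes for a user to defeat history logging; the commented commands are the usual root-only countermeasures (chattr is ext-filesystem-specific, and accton/lastcomm come from the psacct/acct package):

```shell
# Any user can stop .bash_history recording in their own session:
unset HISTFILE              # nothing gets written at logout
export HISTFILE=/dev/null   # or send it to the bit bucket
[ "$HISTFILE" = /dev/null ] && echo "history effectively disabled"
# Partial fix (root, ext2/3/4): make the file append-only, so existing
# entries cannot be edited or truncated (a user can still unset HISTFILE):
#   chattr +a /home/alice/.bash_history
# Kernel-level process accounting is much harder to dodge:
#   accton /var/account/pacct   # start accounting (root, psacct/acct pkg)
#   lastcomm alice              # every command alice ran
```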
