Kickstarting Several *different* Setups


I’m currently using kickstart for installing new servers and have run into the following scenario: all the machines will have the same basic setup of packages, but each will be configured for a specific task. For example, some will be mail-serving machines and won’t need things like a web server or MySQL installed. Others will be web servers and do need those packages.

So my question is: is there some way to determine, via kickstart, what to install on a machine based on some criteria, possibly the IP that’s being assigned to it, or the MAC address, or something …

Right now I have one single kickstart setup that all the machines use, and then I manually install the additional packages, one by one, on each machine. It would be helpful if I could somehow tell kickstart to do that for me.

Suggestions?

13 thoughts on - Kickstarting Several *different* Setups

  • If you just want to use kickstart, it would be pretty simple to serve these via HTTP, and have a simple script in PHP or similar that takes the requesting IP and uses it to choose which version of the kickstart to serve.
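
    Something along those lines would do it; here is a rough, untested bash CGI sketch, assuming Apache with CGI enabled (the paths, addresses, and role split are made up):

        #!/bin/bash
        # /var/www/cgi-bin/ks.cgi -- hand back a kickstart chosen by the requesting IP
        echo "Content-Type: text/plain"
        echo ""
        case "$REMOTE_ADDR" in
            192.168.1.1[0-9]) KS=/var/www/html/ks/web.cfg  ;;  # 192.168.1.10-19: web servers
            192.168.1.2[0-9]) KS=/var/www/html/ks/mail.cfg ;;  # 192.168.1.20-29: mail servers
            *)                KS=/var/www/html/ks/base.cfg ;;  # everyone else: base install
        esac
        cat "$KS"

    The installer then just points at it with ks=http://installserver/cgi-bin/ks.cgi on the boot line.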

    I would suggest that the “right way” would be to kickstart all your machines the same way, and then use a configuration management tool
    (like Puppet or Chef) to customize them. This approach is likely to be more work, but also more maintainable in the long run.

  • Or, if you just want the packages that a custom kickstart would install, use a basic kickstart to bring it up, then run your own script (from an nfs mount, scp’d over, pasted into a command line or whatever you might find easier than learning puppet). The script just needs to determine the rest of the packages needed for this particular server and ‘yum install ….’ them.
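
    As a rough sketch (the package lists and hostname convention here are only examples), that script can be as small as:

        #!/bin/bash
        # post-install.sh -- run once after the common kickstart finishes;
        # maps this host's name to a role and installs the extra packages for it
        set -e
        case "$(hostname -s)" in
            web*)  EXTRA="httpd mod_ssl php" ;;
            mail*) EXTRA="postfix dovecot"   ;;
            db*)   EXTRA="mysql-server"      ;;
            *)     echo "no extra packages for $(hostname -s)"; exit 0 ;;
        esac
        yum -y install $EXTRA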

  • Tom: Thanks for the suggestion. I’ll look into those tools.

    Mark: Yes, they are using pxeboot. Right now when they boot up, the pxe config offers two options, 32- and 64-bit. Are you suggesting I create multiple entries, from which one is chosen based on what the machine is going to be?
    Is there a way to have this done automatically, so I don’t physically have to do that for each machine, but can instead turn the thing on and have it determine what needs to be installed on that particular machine?

    Les: I was hoping for some way to have it all automated, so that if for some reason I’m not in the building, I can instruct someone else to reboot, pick the right kickstart option in the pxeboot menu (be it a web, mail, db, or user server), and a few minutes later have a working machine without them needing to do anything else afterwards. Mirroring the data files from backup is a single step that can be done by any monkey; it’s the configuration, or the manual selection of a script to run (something they can easily screw up), that I want to avoid.

  • There’s always a tradeoff when you hide what is being done: between simplifying things and making it completely impossible for anyone else to understand or fix it if it breaks while you aren’t there. I like a little balance between the extremes, like making the scripts that do the work visible, but including some sanity checking so they won’t run on the wrong machine, or guarding against anything else you can guess ahead of time that someone might do wrong with them. But you could embed the same thing in a CGI kickstart URL, if there is some way it can deduce the right file to deliver, or make your DB restore process add and configure any missing packages needed at that point.
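
    For example, a guard like this at the top of each role script would do it (the /etc/server-role file is a made-up convention, e.g. something your kickstart %post drops in place):

        # refuse to run on the wrong machine
        EXPECTED_ROLE="web"
        ACTUAL_ROLE=$(cat /etc/server-role 2>/dev/null)
        if [ "$ACTUAL_ROLE" != "$EXPECTED_ROLE" ]; then
            echo "this script is for '$EXPECTED_ROLE' machines, not '$ACTUAL_ROLE'" >&2
            exit 1
        fi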

  • Gotcha. Thanks all! You guys gave me the answers I needed to know and hear. For the immediate future I will likely go with multiple pxeboot options, each of which then picks the specific kickstart file. It’s easy for me to put a label on the server that says ‘web’ or ‘mail’ etc. and then just pick the same from the menu.
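
    Concretely, that menu could look something like this in pxelinux.cfg/default (the labels, paths, and hostname are made up; ks= is the CentOS 6 boot syntax):

        DEFAULT menu.c32
        PROMPT 0
        TIMEOUT 300

        LABEL web
            MENU LABEL CentOS 6 x86_64 - web server
            KERNEL centos6/vmlinuz
            APPEND initrd=centos6/initrd.img ks=http://installserver/ks/web.cfg

        LABEL mail
            MENU LABEL CentOS 6 x86_64 - mail server
            KERNEL centos6/vmlinuz
            APPEND initrd=centos6/initrd.img ks=http://installserver/ks/mail.cfg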

    Eventually I’ll delve deeper into custom and automated setups.

  • Seconded.

    Personally, I recommend either ansible or bcfg2 over other tools. Puppet has a larger user base, but when I talk to users at conferences
    (such as LISA), ansible and bcfg2 users tend to like their tools, while an awful lot of people dislike Puppet but use it anyway due to inertia.
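
    Part of ansible’s appeal is that it is agentless: a plain inventory file plus ad-hoc commands over ssh is enough to get started. A quick sketch (the hostnames, groups, and packages are made up):

        # inventory file listing machines by role
        cat > hosts.ini <<'EOF'
        [webservers]
        web01
        web02
        [mailservers]
        mail01
        EOF

        # ad-hoc run over ssh as root: make sure the mail packages are present
        ansible mailservers -i hosts.ini -u root -m yum -a "name=postfix state=present"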

  • There’s also saltstack, which is one of the newer of the bunch. It has some chance of working reasonably across different platforms. How you feel about it will probably depend on how you feel about python in general, and on how you expect upgrades to go in the future.

  • Don’t forget you can define PXE config files based on the IP, IP range, or MAC address of the server. This means that you don’t have to select the correct pxeboot option from a PXE menu; it will select the most specific config file automatically.

    see the following http://www.syslinux.org/wiki/index.php/PXELINUX#Configuration
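
    For example, for a machine at 192.168.1.25 with MAC 00:25:90:ab:cd:ef (made-up values), pxelinux tries config files in roughly this order, most specific first:

        pxelinux.cfg/01-00-25-90-ab-cd-ef   # this exact NIC: "01-" plus the MAC, colons as dashes
        pxelinux.cfg/C0A80119               # exactly 192.168.1.25 (the IP as upper-case hex)
        pxelinux.cfg/C0A801                 # anything in 192.168.1.0/24 (hex prefix, shortened a digit at a time)
        pxelinux.cfg/default                # fallback for everything else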

    Also, you can pass additional arguments on the pxeboot line and read them from your %pre and %post scripts using /proc/cmdline.
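
    For instance, with something like serverrole=mail tacked onto the APPEND line (the argument name is made up), a single shared kickstart can pick its extra packages in %pre. An untested sketch:

        %packages
        @core
        %include /tmp/role-packages
        %end

        %pre
        # turn a "serverrole=" boot argument into an extra package list for %packages above
        ROLE=$(sed -n 's/.*serverrole=\([^ ]*\).*/\1/p' /proc/cmdline)
        case "$ROLE" in
            web)  printf 'httpd\nmod_ssl\n'   > /tmp/role-packages ;;
            mail) printf 'postfix\ndovecot\n' > /tmp/role-packages ;;
            *)    : > /tmp/role-packages ;;
        esac
        %end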

    https://www.redhat.com/promo/summit/2010/presentations/summit/decoding-the-code/wed/cshabazi-530-more/MORE-Kickstart-Tips-and-Tricks.pdf

    Grant

  • Les Mikesell wrote:

    Take a look at Cobbler. I use this to create about 40 servers. Works really well, produces customized kickstarts, has a web GUI
    as well as command line operation, has lots of nice features to get the job done.
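
    Roughly, the per-role setup in cobbler 2.x looks like this (names, paths, and addresses are made up, and option syntax may differ slightly between versions):

        # assumes the CentOS tree has already been imported as distro "centos6-x86_64"
        cobbler profile add --name=web  --distro=centos6-x86_64 --kickstart=/var/lib/cobbler/kickstarts/web.ks
        cobbler profile add --name=mail --distro=centos6-x86_64 --kickstart=/var/lib/cobbler/kickstarts/mail.ks
        cobbler system add --name=web01 --profile=web --mac=00:25:90:ab:cd:ef --ip-address=192.168.1.25
        cobbler sync   # regenerate the PXE configs so web01 boots straight into the web kickstart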

  • Is this what you are talking about?

    Available Packages
    salt.noarch           2014.7.0-3.el6    epel
    salt-api.noarch       2014.7.0-3.el6    epel
    salt-cloud.noarch     2014.7.0-3.el6    epel
    salt-master.noarch    2014.7.0-3.el6    epel
    salt-minion.noarch    2014.7.0-3.el6    epel
    salt-ssh.noarch       2014.7.0-3.el6    epel
    salt-syndic.noarch    2014.7.0-3.el6    epel

  • Ansible, Bcfg2, Chef, Cobbler, Puppet, and Salt; I notice that Spacewalk is not mentioned. Any particular reason that it gets no recommendations?

    What about CFEngine? Any comments on this one?

  • Yes: the architecture is that you run one central salt-master, a large number of salt-minions can connect to it, and salt-syndic works as sort of a proxy for even larger sets. It is somewhat cross-platform, with the caveat that the master should be updated to newer versions ahead of the minions, and epel is one of the slower repositories to get updates. I haven’t gone beyond simple testing myself, because I think managing updates/compatibility for the configuration manager shouldn’t be more trouble than just managing your own app in the first place. But we tend to make few changes beyond version updates once a system is deployed. If you regularly spin whole new clusters up and down, I could see how it could save time.
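
    To try it with those packages, the bare minimum is one master plus a minion pointed at it; a sketch (the master hostname is an assumption):

        # on the management box
        yum -y install salt-master && service salt-master start

        # on each managed box: install the agent and tell it where the master is
        yum -y install salt-minion
        echo "master: saltmaster.example.com" >> /etc/salt/minion
        service salt-minion start

        # back on the master: accept the minions' keys, then check they answer
        salt-key -A
        salt '*' test.ping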

  • I think with Spacewalk you are pretty much committed to using nothing but Fedora or RHEL derivatives, and there’s quite a bit of overhead to get started. I’d prefer infrastructure that is more flexible.

    I haven’t looked at it for a long time, but my impression was that it gave way too much autonomy to each node. In our scheme of things, if a node is not communicating correctly on the network, the best thing that can happen is for it to die quietly and let the redundant systems fill in. We don’t want it trying to fix itself when things aren’t working as expected.