Puppet 3.0 Upgrade Experience
So on Monday, in the middle of the day, all of my systems upgraded to 3.0. Before I go too far into this post I want to explain that this was my fault. The puppet module that controls the version of puppet installed on each of my clients is set to "ensure => latest". Obviously this forced an upgrade of my entire environment. This was very much a lack of up-front thinking on my part. That being said, I'd like to point out a few things other users might encounter when upgrading.
Puppetmaster Upgrade Issues
So I manage a few puppet masters, each living in its own AWS VPC. All of my puppetmasters upgraded the client packages to 3.0 as well as the puppetmaster server packages. This went very smoothly on each of the puppetmasters except for one. On one of my masters I had the puppet gem installed, and apparently when passenger tries to start it attempts to load the related gems, so the stale gem got picked up instead of the upgraded packages. This caused me a bit of frustration because I was unable to figure out what the issue was. After you upgrade to puppet 3.0, if your puppetmaster doesn't start, do this:
sudo gem list
If you see the puppet gem and it hasn’t been upgraded to 3.0 do the following:
sudo gem install puppet
This will upgrade your gem to the 3.0 version and your puppetmaster daemon will start just fine.
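If you want to bake this check into your own tooling, here is a minimal sketch. The version-matching logic and function name are my own illustration, not anything puppet ships; it just inspects `gem list`-style output for a pre-3.0 puppet gem. The example feeds in sample text so the check itself is visible:

```shell
#!/bin/sh
# Check whether an old puppet gem is installed that would shadow the
# upgraded 3.0 packages when passenger loads its gems.
# Reads `gem list` output on stdin.

check_puppet_gem() {
    # Pull out the line for the puppet gem, if any
    line=$(grep '^puppet ' || true)
    if [ -z "$line" ]; then
        echo "no puppet gem installed"
    elif echo "$line" | grep -q '(3\.'; then
        echo "puppet gem is 3.x, ok"
    else
        echo "stale puppet gem: $line -- run 'sudo gem install puppet'"
    fi
}

# Example with sample `gem list` output:
printf 'puppet (2.7.19)\nrake (0.9.2)\n' | check_puppet_gem
# → stale puppet gem: puppet (2.7.19) -- run 'sudo gem install puppet'
```

On a real box you would pipe the live output in with `gem list | check_puppet_gem`.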
Apply Puppet Manifest To Puppet Clients
So one of the first things I noticed is that puppetd is now gone. I know there has been discussion about puppetd being deprecated, but I didn't take the time to change my automation scripts ahead of the update. If you wish to apply the current manifest to your clients you can no longer use "puppetd --test" or "puppet kick"; you need to do the following:
sudo puppet agent -t
Puppet CA Changes
Next, I had to figure out how to remove SSL certificates from the puppetmaster. Since I manage our EC2 environments, we are deploying and undeploying instances all the time. Part of our undeploy automation is to remove the signed SSL certificate from the puppetmaster for the instance that is being undeployed. We don't redeploy instances with the same name often, but when I am testing automation tasks there are times I redeploy the same instance with the same name over and over.
Previously I used:
puppetca --clean <certname>
Now I have to use:
puppet ca destroy <certname>
Listing all of the systems which have signed SSL certificates has changed as well; the old way was:
puppetca --list --all
The new way is:
puppet cert list --all
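Our undeploy automation now wraps the new command. Here is a minimal sketch of that cleanup step; the function name, the example certname, and the DRY_RUN flag are all my own illustration, not puppet features:

```shell
#!/bin/sh
# Sketch of the undeploy cleanup step: remove an instance's signed cert
# from the puppetmaster so the same name can be reused later.
# Set DRY_RUN=1 to preview the command instead of running it.

clean_cert() {
    certname="$1"
    if [ -z "$certname" ]; then
        echo "usage: clean_cert <certname>" >&2
        return 1
    fi
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Preview only; useful when testing the automation itself
        echo "would run: puppet ca destroy $certname"
    else
        puppet ca destroy "$certname"
    fi
}

# Example: preview the cleanup for a hypothetical instance
DRY_RUN=1 clean_cert web01.example.com
# → would run: puppet ca destroy web01.example.com
```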
In The Future
I know many of you know this already, but for those of you that don't: make sure your puppet module doesn't have:
package { puppet: ensure => latest, require => File["puppetlabs-list"] }
Instead use:
package { puppet: ensure => "3.0.0-1puppetlabs1", require => File["puppetlabs-list"] }
Or whatever puppet version you would like. And make sure you test the upgrade of your puppet clients before upgrading your server. I only had to spend a few moments changing some automation due to this issue, but it could have been a lot more catastrophic.
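For context, here is roughly what the pinned resource looks like alongside the apt repo file it requires. The paths, module name, and version string are from my setup; substitute whatever version you have actually tested:

```puppet
# Manage the puppetlabs apt repo file, then pin the agent to a tested
# version instead of "latest". File paths and the module name here are
# examples from my own layout.
file { "puppetlabs-list":
  path   => "/etc/apt/sources.list.d/puppetlabs.list",
  source => "puppet:///modules/puppet/puppetlabs.list",
}

package { puppet:
  ensure  => "3.0.0-1puppetlabs1",
  require => File["puppetlabs-list"],
}
```

When a new release comes out, you bump the version string deliberately rather than having every node jump on release day.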
We ran into similar problems upgrading puppet to 3.0.0 via "ensure => latest" as well. Ours happened on the last day of PuppetConf SF. Luckily, my boss and I were at work when it happened. Here is what we ran into.
* Had to update the config.ru file for our nginx/passenger setup on the puppetmaster (once for 3.0.0 and again for 3.0.1)
* Had to update all of our sources from puppet:///path/to/file to puppet:///modules/<module>/path/to/file. This took a while to diagnose. Basically, run the puppetmaster with --no-daemonize --debug and run a client against it.
* Had to remove all 2.x versions of puppet on all puppet clients and puppetmaster servers. Somehow they got mixed up when multiple versions were installed.
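To make the second bullet concrete, here is what that source change looks like in a file resource. The module name, file path, and resource title are made up for illustration:

```puppet
# Old (pre-3.0) style source that stopped working for us:
#   source => "puppet:///mymodule/path/to/file"
# New style, with the explicit "modules" mount point:
file { "/etc/example.conf":
  ensure => file,
  source => "puppet:///modules/mymodule/path/to/file",
}
```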
Currently we are very happy with the new version of puppet. And we were able to upgrade our system ruby to the latest 1.9.3.