A few months back I was tasked with deploying Uptime, a remote monitoring application built with Node.js, MongoDB, and Twitter Bootstrap. The reason for using Uptime was to gain a greater level of data around the uptime of our internal infrastructure and systems for retrospective viewing.
The great thing about this task was that it gave me the opportunity to build it all via Puppet and really understand its workings and best practices.
We started with the essentials: creating a node manifest for the Uptime server, then searching Puppet Forge for relevant modules we didn't already have in our repo. Once we had found the necessary modules (such as MongoDB and NVM), we included them and then looked to grab the Uptime repo from GitHub.
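As a rough sketch, the node manifest looked something like the following. The class and resource names here (mongodb, nvm, vcsrepo) are illustrative, based on common Forge modules, not our exact internal ones:

```puppet
# Illustrative node manifest for the Uptime server.
node 'uptime.example.com' {

  # Community modules pulled in from Puppet Forge
  include ::mongodb::server

  class { '::nvm':
    user => 'uptime',
  }

  # Clone the Uptime app itself from GitHub
  vcsrepo { '/opt/uptime':
    ensure   => present,
    provider => git,
    source   => 'https://github.com/fzaninotto/uptime.git',
  }
}
```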
With all the necessities installed we could then configure the app via a YAML file and drop it in. To further automate the application we took a pre-existing service script and modified it to start the app on boot. At this point we had a working app that was accessible and usable; however, we wanted to apply some form of authentication in front of it.
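In the manifest this boiled down to a couple of resources, roughly like the following. The file paths, source locations and service name are assumptions for illustration:

```puppet
# Drop in the YAML config and wire up the init script so the
# app starts on boot. Paths and names here are illustrative.
file { '/opt/uptime/config/default.yaml':
  ensure => file,
  source => 'puppet:///modules/uptime/default.yaml',
  notify => Service['uptime'],
}

file { '/etc/init.d/uptime':
  ensure => file,
  source => 'puppet:///modules/uptime/uptime.init',
  mode   => '0755',
}

service { 'uptime':
  ensure  => running,
  enable  => true,
  require => File['/etc/init.d/uptime'],
}
```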
For this layer of authentication we decided to use Apache along with the LDAP, proxy and SSL modules, to make use of our existing LDAP and provide encryption. We installed Apache and configured an uptime vhost proxying to the box locally, as we had set Uptime to be accessible via localhost only. Once the config file was dropped onto the box and Apache was running, all requests were redirected to the app via localhost over SSL, and users were prompted for their credentials before accessing the Uptime dashboard.
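The vhost ended up along these lines. The hostnames, certificate paths, LDAP URL and backend port are placeholders rather than our real values (Uptime's default port is 8082):

```apache
<VirtualHost *:443>
    ServerName uptime.example.com

    SSLEngine on
    SSLCertificateFile    /etc/ssl/certs/uptime.crt
    SSLCertificateKeyFile /etc/ssl/private/uptime.key

    # Uptime listens on localhost only; proxy everything through Apache
    ProxyPass        / http://localhost:8082/
    ProxyPassReverse / http://localhost:8082/

    # Basic auth against our existing LDAP directory
    <Location />
        AuthType Basic
        AuthName "Uptime"
        AuthBasicProvider ldap
        AuthLDAPURL "ldap://ldap.example.com/ou=people,dc=example,dc=com?uid"
        Require valid-user
    </Location>
</VirtualHost>
```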
While the above is terribly simple, it was a fantastic opportunity for me to learn about Puppet, automation and best practices. There are a few things I learnt, or ideas that were reinforced, during the whole process, so take a step back guys and girls as I'm about to drop some knowledge:
Not-invented-here syndrome is an issue we've all faced before, whether in ourselves or in a colleague. We've all been there, sucking air through gritted teeth while muttering "I wouldn't have done it that way". While there may occasionally be a case for completely redoing something, in the majority of situations it's just not necessary.
For this particular project, NIHS came into play around module usage, as we have plenty of modules (self-made and community-made) at our disposal. Instead of creating my own or writing a rather convoluted manifest/module, I went with ones we already use or popular community-written ones, which sped everything up and hopefully lessened any potential tech debt.
While you want to automate as much as possible, not everything is worth ripping your hair out over and wasting time on. From bringing up a new VM and Puppetising it, the current node manifest automates the entire installation and setup of the application, except for the installation of the app's dependencies. Even though I tried to automate this, it became a blocker, and while it is most definitely a solvable problem, I decided not to concentrate on it.
My reasoning is that it would only have to be run once, during the initial installation of the app. There's no reason to spend large amounts of time on a task that only needs doing once; even in the event of a rebuild, the hours spent solving this compared to the five minutes spent doing it manually simply don't match up. Automation is there to remedy laborious tasks and free up time, not soak up even more of it.
Aside from the automation aspect of Puppet, there's the documentation you get from it as well. By writing out my node manifest accompanied with notes, any member of the team can look at my code and figure out exactly how it's configured and what each bit does. This also helps with debugging and, if you look back at it in future, with remembering how you pieced it together.
When things are over-engineered they become harder to pick apart when problems arise, and in my personal experience are typically more prone to go wrong. By keeping your manifests simple and modular you can chop and change bits without breaking the entire thing. My personal opinion is that simpler is better, as there is less to go wrong. That is not to say that complexity is always avoidable, but try to keep it to a minimum.
There are plenty of modules and manifests that have been written by my colleagues which I tend to delve into to re-use snippets of code. As mentioned previously, NIHS just isn’t necessary as these previously written working bits of code can be used for whatever you’re doing. It will save you time and stress, as someone’s done the legwork for you. Don’t be too proud.
When I was writing the manifest I used Vagrant to test my changes locally without constantly pushing to live. It gave me the opportunity to trash and rebuild the box within minutes and test the automation side of the manifest. I was able to make quick and drastic changes without any risk of upsetting the live Puppet repo, and any changes I needed to make I could verify within minutes. For me it's an invaluable tool and my go-to software for safe testing.
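A minimal Vagrantfile for this kind of loop might look like the following; the box name, forwarded port and manifest paths are illustrative, not our actual setup:

```ruby
# Vagrantfile sketch: spin up a throwaway VM and apply the node
# manifest with Vagrant's Puppet provisioner.
Vagrant.configure('2') do |config|
  config.vm.box = 'ubuntu/trusty64'
  config.vm.network 'forwarded_port', guest: 8082, host: 8082

  config.vm.provision 'puppet' do |puppet|
    puppet.manifests_path = 'manifests'
    puppet.manifest_file  = 'uptime.pp'
    puppet.module_path    = 'modules'
  end
end
```

`vagrant destroy -f && vagrant up` then gives you a clean rebuild and a fresh Puppet run in minutes.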
Over the past few years I have been exposed to Puppet, but never really delved that deeply into it. Here are some top-level pros and cons I've cobbled together from my experience so far:
Pros:
- Puppet allows for automation of your nodes
- There is a large Puppet community thanks to its market share and customer base, and the level of documentation and support is vast.
- The ability to gain documentation through configuration is fantastic.
- Securely store and transfer passwords via Hiera.
- Puppet supports many platforms out of the box
- Open source
- Supports both Puppet and pure Ruby when writing your manifests and modules.
- Clear and understandable errors on Puppet run fails for debugging.
Cons:
- Introducing new OSes can be a nuisance if you haven't initially written your manifests/modules with OS agnosticism in mind.
- Repos can become messy over time, but that comes down to housekeeping more than Puppet.
- Shouldn’t use Puppet for large file transfers
- Mismatch of Ruby and Gem versions can be a colossal pain to fix especially in terms of Mac OS X agents.
- Without version control you cannot see what was previously applied to the box.
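On the Hiera point above: with an encrypted backend such as hiera-eyaml (an assumption here, as backends vary), secrets can live in the repo encrypted and be decrypted at catalog compile time. The key name below is purely illustrative:

```yaml
# hieradata/uptime.eyaml -- illustrative key, not our real data.
# The ENC[PKCS7,...] blob is produced by `eyaml encrypt`.
uptime::ldap_bind_password: ENC[PKCS7,...]
```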
Even though my time with Puppet and configuration management/automation software has been limited, I've now caught the bug and want to automate all the things. If you haven't looked into Puppet or its cousins, I'd definitely recommend it. Puppet won't solve all your problems but it's a good start.
Open source is the best source
As always this wouldn’t have been possible without the fantastic open source community.