Scaling Puppet for Donuts

In the last year we’ve had a fair number of challenges within NetOps, especially with our configuration management tool of choice, Puppet. Not only did we have a big piece of work that involved taking the giant leap from Puppet 2.x to 3.x, we also faced some architectural and performance challenges.

Whilst the upgrade was successful, we continued to have an architecture that was vertically scaled, and worse still, our CA signing authority host had become a snowflake due to manual intervention during our upgrade. The architecture issues really started to become apparent when we hit around 600 client nodes.

Now, as the old saying goes, if you have a problem, and no one else can help, maybe you should open… a JIRA! So we did, and it included the promise of donuts to the person willing to do the following:

1: Puppetise all our puppet infrastructure – inception FTW.
2: Add a level of redundancy and resilience to our Puppet CA signing authority.
3: Get us moved to the latest version of PuppetDB.
4: Make Puppet Dashboard better somehow.
5: Do it all on Debian Wheezy because Debian Squeeze support expires soon.
6: Seamlessly move to the new infrastructure.

What happened next was three weeks of work creating new modules in our Puppet git repo that could sit alongside our current configuration and be ready for the proverbial flip of the switch at the appropriate moment.

After a quick bit of research it became clear that the best approach was to separate our CA signing authority host from the Puppet masters that would serve the vast majority of requests. This would allow us to make the CA resilient, which we achieved through bi-directional syncing of the signed certificates between our primary and failover CA.
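For illustration, the split is mostly a matter of a few puppet.conf settings: agents send certificate traffic to the CA pair and everything else to the worker masters, while the workers themselves stop acting as a CA. The hostnames below are made up and this is only a sketch of the idea, not our exact configuration:

# On every agent node (hypothetical hostnames)
[agent]
    server    = puppet-workers.example.com   # load-balanced worker masters
    ca_server = puppet-ca.example.com        # primary/failover CA pair

# On the worker masters: don't act as a CA
[master]
    ca = false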

Separation also meant that our “worker” cluster could be horizontally scaled on demand, and we estimate we can easily accommodate 2000 client nodes with our new set-up, which looks like this:

[Diagram: our new Puppet architecture]

You may be thinking at this point that PuppetDB is an anomaly because it’s not redundant. However, we took the view that as the reporting data was transient and could potentially change with every puppet run on our agent nodes, we could quite happily take the hit on losing it (temporarily).

Yes, we would have to rebuild it, but the data would repopulate once it was back online. In order to cope with a PuppetDB failure we enabled the “soft_write_failure” option on our Puppet masters, i.e. the CA and worker hosts. This meant that they would still serve catalogs in the event of a PuppetDB outage.
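That option lives in the PuppetDB terminus configuration on each master; a minimal sketch of puppetdb.conf (with a hypothetical hostname) looks like this:

[main]
server = puppetdb.example.com   # hypothetical PuppetDB host
port = 8081
# keep compiling and serving catalogs even if writes to PuppetDB fail
soft_write_failure = true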

Finally, we decided to move away from the official Puppet Dashboard – which relied on reports and local SQL storage – and use the Puppetboard GitHub project instead, as it talks directly to PuppetDB. Puppetboard is written using the Flask (Python) web framework, and we run it internally fronted by Facebook’s Tornado web server.
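Gluing the two together takes only a few lines; a minimal sketch (assuming Puppetboard exposes its Flask app as puppetboard.app.app, and picking an arbitrary port) looks something like this:

# Serve the Puppetboard Flask (WSGI) app through Tornado
from tornado.httpserver import HTTPServer
from tornado.ioloop import IOLoop
from tornado.wsgi import WSGIContainer

from puppetboard.app import app  # Puppetboard's Flask application

http_server = HTTPServer(WSGIContainer(app))
http_server.listen(5000)  # port is arbitrary for this sketch
IOLoop.instance().start()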

How bloated is your PostgreSQL database?

When dealing with databases (or, in fact, any data that you need to read from disk), we all know how important it is to have a lot of memory.  When we have a lot of memory, a good portion of the data gets nicely cached thanks to smart operating system caching, and most of the data, when requested, comes from memory rather than disk, which is much, much faster.  Hence, keeping your dataset as small as possible becomes quite an important maintenance task.

One of the things that takes quite a bit of space in PostgreSQL, which we use across most of the systems here at Mind Candy, is indexes.  That’s a good thing, because they vastly speed up access to data; however, they easily get “bloated”, especially if the data you store in your tables gets modified often.

However, before we can even tackle the actual problem of bloated indexes, we first need to figure out which indexes are bloated.  There are some tricky SQL queries you can run against the database to estimate index bloat, but in our experience the results were not always accurate (and sometimes quite far off).

Besides having happy databases, we also care a lot about the actual data we store, so we back it up very often (nightly backups + PITR backups), and once a day we do a fully automatic database restore to make sure the backups we take actually work.

Now, a restore operation includes building indexes from scratch, which means those indexes are fresh and free of bloat.

If only we could compare the sizes of the indexes in our production databases with the ones from the restored backups, we could say, very precisely, how much bloat we’ve got in our production database.  To help with that, we wrote a simple Python script.

$ ./indexbloat.py -ps csva.csv csvb.csv
Index idx3 size compare to clean import: 117 % (14.49G vs. 12.35G)
Index idx2 size compare to clean import: 279 % (14.49G vs. 5.18G)
Ough!  idx4 index is missing in the csvb.csv file.  Likely a problem with backup!
Total index bloat: 11.46G

The whole process works as follows:

  1. At the time of backup, we run a long SQL query which prints all indexes in the production database along with their sizes, in CSV format (a sketch of this kind of query is shown after this list)
  2. After the backup is restored, we run the same SQL query, which prints all indexes in the “fresh” database along with their sizes, again in CSV format
  3. We then run the aforementioned Python script, which parses both CSV files and prints out human-friendly output showing exactly how much bloat we’ve got in our indexes
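We can’t reproduce our exact query here, but as an illustration of what steps 1 and 2 look like, something along these lines does the job (the CSV column names are invented for this sketch):

# Sketch: dump every user index and its on-disk size as CSV
# (illustrative only -- not the exact query we run in production)
import csv
import sys

import psycopg2

QUERY = """
SELECT schemaname || '.' || indexrelname AS index,
       pg_relation_size(indexrelid)      AS size_bytes
  FROM pg_stat_user_indexes
 ORDER BY 1
"""

conn = psycopg2.connect("dbname=production")  # hypothetical connection string
with conn.cursor() as cur:
    cur.execute(QUERY)
    writer = csv.writer(sys.stdout)
    writer.writerow(["index", "size_bytes"])
    writer.writerows(cur.fetchall())
conn.close()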

We also added a percentage threshold option so it’ll only print out indexes with more than X% bloat.  This is so we won’t get bothered by small differences.

The aforementioned script, called pgindexbloat, can be found on the Mind Candy GitHub account.  It’s very easy to run from a cronjob or to wrap in a check script that feeds Nagios / Sensu.
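For the curious, the comparison itself is conceptually trivial; ignoring unit formatting and the real script’s command-line handling, it boils down to something like this sketch (which reuses the hypothetical CSV columns from the query sketch above):

# Simplified sketch of the comparison idea -- not the actual pgindexbloat code
import csv

def load_sizes(path):
    # map index name -> size in bytes
    with open(path) as f:
        return {row["index"]: int(row["size_bytes"]) for row in csv.DictReader(f)}

def report_bloat(prod_csv, restored_csv, threshold_pct=0):
    prod, restored = load_sizes(prod_csv), load_sizes(restored_csv)
    total_bloat = 0
    for name, prod_size in sorted(prod.items()):
        if name not in restored:
            print("%s is missing from the restored backup!" % name)
            continue
        ratio = 100.0 * prod_size / restored[name]
        if ratio - 100 >= threshold_pct:
            total_bloat += prod_size - restored[name]
            print("%s: %.0f%% of its freshly built size" % (name, ratio))
    print("Total index bloat: %d bytes" % total_bloat)

report_bloat("csva.csv", "csvb.csv", threshold_pct=10)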

As an interesting note, I’ll just add that the first run of the script uncovered nearly 40GB worth of bloat on our production database.  That was much more than we anticipated, and getting rid of that bloat definitely made our database much happier.