I spent a few years writing down my thoughts about how one should approach problems. If you're looking for a how-to guide, a cookbook, or a reference, this book is not for you. If you want to learn by challenging the way you think, pick up a copy.
It has ample criticism (all deserved), and we're continuing to work on the "consumability" aspects of the product. One of the comments he made concerned the interesting choice of Lua as the language for extending the product. I really wanted to use Perl or Python to extend Reconnoiter, but Lua has some pretty magical properties that made it meld well with the noit internals. In my opinion it is perhaps one of the more technically interesting parts of the product, as the search for a language was long and the choice reluctant. When I have more time, I'll write an article about embedding Lua in noit and the deeper reasoning behind it.
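For readers who haven't seen Lua embedded in a C host before, here is a minimal sketch of the general pattern (this is illustrative only, not Reconnoiter's actual code): create a Lua state, expose a C function to scripts, and run an extension file. The record_metric function and check.lua filename are hypothetical, chosen just to show the shape of the API.

```c
#include <stdio.h>
#include <lua.h>
#include <lauxlib.h>
#include <lualib.h>

/* A hypothetical C function exposed to Lua extension scripts. */
static int record_metric(lua_State *L) {
    const char *name = luaL_checkstring(L, 1);
    double value = luaL_checknumber(L, 2);
    printf("metric %s = %f\n", name, value);
    return 0; /* number of values returned to Lua */
}

int main(void) {
    lua_State *L = luaL_newstate();    /* create a fresh Lua state */
    luaL_openlibs(L);                  /* load the standard libraries */

    lua_pushcfunction(L, record_metric);
    lua_setglobal(L, "record_metric"); /* make the C function callable from Lua */

    /* Run an extension script; errors are reported, not fatal to the host. */
    if (luaL_dofile(L, "check.lua") != 0)
        fprintf(stderr, "lua error: %s\n", lua_tostring(L, -1));

    lua_close(L);
    return 0;
}
```

Part of what makes Lua attractive for this kind of embedding is exactly how small that surface is: the host stays in control of the state, and scripts only see what you explicitly hand them.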
Also, don't be scared off by Lua; you can always write extensions in C! And although standalone processes don't meld well with noit's high-performance design, there is an extproc module that allows checks to be written as standalone scripts and works fine with existing Nagios checks. I say I wrote it to make adoption a bit easier, but in truth Eric and Mark made me do it.
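For context, a check in the Nagios style communicates through a single line of output plus an exit code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN), which is what makes existing plugins easy to reuse. Below is a rough sketch of such a check; the measurement and thresholds are made up for illustration, and this is not the extproc module itself.

```c
#include <stdio.h>

/* Conventional Nagios plugin exit codes. */
#define STATE_OK       0
#define STATE_WARNING  1
#define STATE_CRITICAL 2
#define STATE_UNKNOWN  3

int main(void) {
    /* Hypothetical measurement; a real check would probe a service here. */
    double response_ms = 42.0;

    if (response_ms > 500.0) {
        printf("CRITICAL - response time %.1fms\n", response_ms);
        return STATE_CRITICAL;
    }
    if (response_ms > 100.0) {
        printf("WARNING - response time %.1fms\n", response_ms);
        return STATE_WARNING;
    }
    printf("OK - response time %.1fms\n", response_ms);
    return STATE_OK;
}
```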
I gave a talk at the Percona Performance conference (same time as the MySQL conference, in the same facility... can we say awkward?) about running large PostgreSQL installs. I referred to a few instances in the presentation that are a handful of terabytes in size. In today's world, these aren't that large; however, we do pretty deep analytics on these installs. It is most definitely not a case of store and forget.
A few people came up and said: "I thought you were going to talk about big... a terabyte is not big." I could rebut that with "it's not how big it is, it's what you do with it," but then I would be on the defensive. The truth of the matter is that it is a combination of things. Size matters: below a certain threshold, you simply can't call it large. Usage matters: if you don't do something interesting with the data, you might as well be throwing it away. While most of the PostgreSQL instances we deal with were considered large by 2005 standards, a terabyte simply no longer meets the bare minimum for "large."
I'm excited that we're looking to launch a new service that will turn the tables on "large" for PostgreSQL. We very well could have 10 petabytes in Postgres in no time if things go as planned. Not only will we meet the size requirements, we'll also be doing lots of interesting things with the data. Big Bad PostgreSQL indeed.