My opinion is that the only reason the big enterprise storage vendors have gotten away with network block storage for the last decade is that they can afford to over-engineer the hell out of it and have the luxury of running enterprise workloads, which is a code phrase for “consolidated idle workloads.” When the going gets tough in enterprise storage systems, you do capacity planning and make sure your hot apps are on dedicated spindles, controllers, and network ports.
Last week Robert Treat told me it sure would be nice if we could reconstruct PostgreSQL logs from network captures (in the sort of antagonistic way that says: "MySQL can do it, why can't we?"). With pgsniff, we can. Well, it turns out he was complaining for a reason: a client. Our friends over at Etsy have a server so blindingly busy selling handmade things that logging all queries on the box degrades performance unacceptably.
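For the curious, here is a minimal sketch of the idea (in the spirit of what pgsniff does, not its actual code). On the wire, PostgreSQL's simple-query message is a 'Q' type byte, a 4-byte big-endian length that counts itself but not the type byte, then a NUL-terminated SQL string. The capture filename and the use of scapy are my assumptions; real traffic also requires TCP stream reassembly and the extended-query protocol, which pgsniff handles and this toy does not.

```python
# Toy extraction of simple-query ('Q') messages from a pcap.
# Assumes: scapy is installed, the file is capture.pcap, and the
# server listens on the default PostgreSQL port 5432.
import struct
from scapy.all import rdpcap, TCP

for pkt in rdpcap("capture.pcap"):
    if not pkt.haslayer(TCP) or pkt[TCP].dport != 5432:
        continue  # only client-to-server traffic to the default port
    payload = bytes(pkt[TCP].payload)
    if len(payload) < 5 or payload[0:1] != b"Q":
        continue  # not a simple-query message
    # The length field counts itself (4 bytes) plus the query string
    # and its NUL terminator, but not the leading type byte.
    (length,) = struct.unpack("!I", payload[1:5])
    sql = payload[5:1 + length].rstrip(b"\x00").decode("utf-8", "replace")
    print(sql)
```

The appeal on a busy box is exactly that the sniffing happens off to the side: the database itself never pays the logging cost.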
I gave a talk at the Percona Performance Conference (held at the same time as the MySQL Conference, in the same facility... can we say awkward?) about running large PostgreSQL installs. In the presentation I referred to a few instances that are a handful of terabytes in size. In today's world that isn't particularly large, but we do pretty deep analytics on these installs. It is most definitely not a case of store-and-forget.
Those who are really familiar with running Oracle on production systems realize that it has issues (as do all databases), but there are certain things you can ask of your database that, after a while, you start to take for granted. In particular: all of the statistics and introspection tools you can use to see what is really going on in the system. For a long time, I've used TOra for that.
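To give a flavor of what I mean (a hypothetical sketch, not anything from TOra), Oracle exposes dynamic performance views like v$session and v$sql that let you ask, in a single query, which sessions are active and what SQL they are running right now. The credentials and DSN below are placeholders.

```python
# A sketch of the kind of introspection Oracle spoils you with:
# active sessions joined to the SQL they are currently executing.
# Assumes the cx_Oracle driver; connection details are hypothetical.
import cx_Oracle

conn = cx_Oracle.connect("monitor", "secret", "dbhost/ORCL")
cur = conn.cursor()
cur.execute("""
    SELECT s.sid, s.username, s.machine, q.sql_text
      FROM v$session s
      JOIN v$sql q
        ON q.sql_id = s.sql_id
       AND q.child_number = s.sql_child_number
     WHERE s.status = 'ACTIVE'
""")
for sid, username, machine, sql_text in cur:
    print(sid, username, machine, sql_text)
cur.close()
conn.close()
```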
I just gave a talk at OSCON on PostgreSQL. Specifically, it covers migrating a portion of a large data architecture from Oracle to PostgreSQL, with an honest assessment of what was gained and the pains involved. Several people have requested that the talk be put online. Here it is: